
Tech News - Programming

573 Articles

Brian Goetz on Java futures at FOSDEM 2019

Prasad Ramesh
11 Feb 2019
3 min read
At FOSDEM 2019, Java language architect Brian Goetz talked about the future of Java over the next few years. Let's take a look at the highlights of his talk.

Java has been around for more than 20 years and has been declared dead by critics numerous times. Its maintainers have kept the language alive by staying relevant to current problems and hardware. The faster release cycle allows the Java team to ship good small features and lay the groundwork for future releases. Preview features reduce risk by gathering feedback early. Various projects in the works will allow Java to meet the rising expectations of developers, bringing value types, generic specialization, and better interoperability with native code.

Switch and pattern matching

Goetz went over the new switch expressions in Java 12 with an example. The new expression form simplifies the code a lot: it requires less typing and makes the code less error prone. Switch expressions are part of a larger concept called pattern matching, which combines a test, a conditional extraction, and a binding into one operation. Code using pattern matching looks cleaner because it eliminates redundant boilerplate.

Project Valhalla

The goal of this project is to reboot the way the JVM lays out data in memory. Hardware has changed drastically in the last three decades: the cost of a memory fetch relative to arithmetic has increased hundreds of times, and memory efficiency is lost to this widening gap. Alternatives like stuffing data into arrays under a single header make the code worse. Project Valhalla introduces value types that 'code like a class, work like an int'. The project has been running for the past five years and has gone through different phases. The current prototype, LW1, has the VM underpinnings validated. The next prototype, LW2, due next year, should be good for experimentation.

Project Metropolis is also in its early stages. It is about replacing the C2 compiler with the Graal compiler.

The Java team is working on a lot of features across various categories, such as language productivity features, fundamental VM performance features, native interop, and concurrency models. These are starting to take shape after years of work across various projects. The twice-yearly releases help test more features than before, and the limited LTS releases let the core developers work with better focus. Project Valhalla seems promising and could possibly make Java much more memory efficient. To see the code demos and the explanation with Q&A, you can watch the talk.

7 things Java programmers need to watch for in 2019
Netflix adopts Spring Boot as its core Java framework
IntelliJ IDEA 2018.3 is out with support for Java 12, accessibility improvements, GitHub pull requests, and more
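Goetz's Java switch example is not reproduced in this summary. As a rough, language-neutral illustration of the idea that pattern matching fuses a test, an extraction, and a binding into one operation, here is a sketch using Python 3.10's structural pattern matching; the shapes and names are invented for illustration and are not from the talk:

def describe(shape):
    # Each match arm performs the test (is it a tagged tuple of this shape?),
    # the extraction (pull out the fields), and the binding (name them) at once.
    match shape:
        case ("circle", radius):
            return f"circle with radius {radius}"
        case ("rect", w, h) if w == h:  # a guard adds a conditional test
            return f"square with side {w}"
        case ("rect", w, h):
            return f"rectangle {w} x {h}"
        case _:
            return "unknown shape"

print(describe(("rect", 3, 3)))  # square with side 3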

GitHub introduces Project Paper Cuts for developers to fix small workflow problems, iterate on UI/UX, and find other ways to make quick improvements

Melisha Dsouza
29 Aug 2018
4 min read
GitHub has introduced "Project Paper Cuts", inspired by the many small refinements GitHub has shipped over time. The project aims to fix smaller code-related and UI issues that users face during their project development workflow.

Source: Twitter

Project Paper Cuts is committed to working directly with the community to fix small to medium-sized workflow problems, iterate on UI/UX, and find ways to make quick improvements to nagging issues that users often encounter in their projects. It focuses on issues that have the most impact but are supported by hardly any discussion. Most "paper cuts" will have a public changelog entry associated with them so users can keep pace. The less-talked-about issues that GitHub has already managed to solve are:

#1 Unselectable markers when copying and pasting the contents of a diff
The + and - diff markers are no longer copied to the clipboard when users copy the contents of a diff.

#2 Edit a repository's README from the repository root
If users have permission to push to a repository, they can edit a README file from the repository root by clicking the pen icon to the right of the README's file header.

#3 Access your repositories straight from the profile dropdown
Users can use the profile dropdown, on any page, to go straight to the "Your Repositories" tab within their user profile.

#4 Highlight permalinked comments
When following a permalink to a specific comment in an issue or pull request, the comment is highlighted so that users can easily find it among the other comments in the thread.

#5 Remove files from a pull request with a button
Users with write permission can click the trash icon for a file right in the pull request's "Files changed" view to make a commit that removes it.

#6 Branch names in merge notification emails
The email notification from GitHub about a merge now includes the name of the base branch that the change was merged into.

#7 Create new pull requests from the repository's "Pull requests" page
When users push branches while on the "Pull requests" tab, GitHub now displays the dynamic "Compare and pull request" widget, so they can quickly create a pull request without having to switch back to the "Code" tab.

#8 Add a teammate from the team discussions page
Users can add an organization member to a team directly from the team discussion page by clicking the + button in the sidebar.

#9 Collapse all diffs in a pull request at once
When a pull request contains a lot of changed files, code reviewers find it hard to isolate the changes that matter to them. Project Paper Cuts lets them collapse or expand the contents of all diffs in a pull request by holding down the alt key and clicking the inverted caret icon in any file header. They can also use the "Jump to file or symbol" dropdown to jump to the file they want to review and automatically expand it.

#10 Copy the URL of a comment
Previously, to grab a permalink to a comment within an issue or pull request, users had to copy the URL from the comment's timestamp. They can now click Copy URL in the comment's options menu to quickly copy the URL to the clipboard.

Project Paper Cuts is aimed solely at helping all developers do their best work, faster. By incorporating customer feedback into the project, GitHub is paving the way for small changes in the way it works. You can read the detailed announcement on the GitHub Blog to know more about Project Paper Cuts.

Git-bug: A new distributed bug tracker embedded in git
Microsoft's GitHub acquisition is good for the open source community
GitHub open sources its GitHub Load Balancer (GLB) Director

Rust 1.31 is out with stable Rust 2018

Prasad Ramesh
07 Dec 2018
3 min read
Yesterday, Rust 1.31.0 and Rust 2018 were announced on the programming language's official blog. Rust 1.31.0 is the first stable iteration of Rust 2018, and many features in this version are now stable.

Rust 2018

Rust 2018 brings all of the work that the Rust team has been doing since 2015 into a cohesive package. This goes beyond language features and includes tooling, documentation, the domain working groups' work, and a new website. Each Rust package can be in either Rust 2015 or Rust 2018, and they work seamlessly together: projects made in Rust 2018 can use dependencies from 2015, and a 2015 project can use 2018 dependencies. This is done so that the ecosystem doesn't split. The new features are opt-in to preserve compatibility in existing code.

Non-lexical lifetimes

Non-lexical lifetimes (NLL) simply mean that the borrow checker is now smarter and accepts some valid code that it previously rejected.

Module system changes

People new to Rust struggle with its module system. Even though simple and consistent rules define the module system, their consequences can come across as inconsistent, counterintuitive, and mysterious. Hence, Rust 2018 introduces a few changes to how paths work. These changes ended up simplifying the module system, and there is now better clarity as to what is going on in it.

More lifetime elision rules

Some additional elision rules for impl blocks and function definitions have been added. For example:

impl<'a> Reader for BufReader<'a> {
    // methods go here
}

This can now be written like this:

impl Reader for BufReader<'_> {
    // methods go here
}

Lifetimes still need to be defined in structs, but they no longer require as much boilerplate as before.

const fn

There are many ways to define a function in Rust:

A regular function with fn
An unsafe function with unsafe fn
An external function with extern fn

Rust 1.31 adds a new way to qualify a function: const fn.

New tools in Rust 1.31

Tools like Cargo, Rustdoc, and Rustup have been crucial to Rust since version 1.0. In Rust 2018, a new generation of tools is ready for all users:

Clippy: Rust's linter.
Rustfmt: A tool for formatting Rust code.
IDE support: Rust is now supported in popular IDEs like Visual Studio Code, IntelliJ, Atom, Sublime Text 3, and Eclipse.

Tool lints

"Tool attributes", like #[rustfmt::skip], were stabilized in Rust 1.30. In Rust 1.31, "tool lints", like #[allow(clippy::bool_comparison)], are being stabilized. These give lints a namespace, making their tool of origin clearer.

Other additions

Apart from changes in the language itself, there are changes in other areas too:

Documentation: "The Rust Programming Language" book has been rewritten.
Domain working groups: Four new domain working groups are introduced: network services, command-line applications, WebAssembly, and embedded devices.
New website: There's a new iteration of the website for Rust 2018.
Library stabilizations: A number of From implementations have been stabilized in the standard library.
Cargo changes: In Rust 1.31, Cargo will download packages in parallel using HTTP/2.

Rust Survey 2018 key findings: 80% developers prefer Linux, WebAssembly growth doubles, and more
Rust Beta 2018 is here
GitHub Octoverse: The top programming languages of 2018

TensorFlow 2.0 beta releases with distribution strategy, API freeze, easy model building with Keras and more

Vincy Davis
10 Jun 2019
5 min read
After all the hype and waiting, Google has finally announced the beta version of TensorFlow 2.0. The headline feature is tf.distribute.Strategy, which distributes training across multiple GPUs, multiple machines, or TPUs with minimal code changes. The TensorFlow 2.0 beta also brings a number of major improvements, breaking changes, and multiple bug fixes. Earlier this year, the TensorFlow team had updated users on what to expect from TensorFlow 2.0. The 2.0 API is now final, with the symbol renaming/deprecation changes completed, and is available as part of the TensorFlow 1.14 release in the compat.v2 module.

TensorFlow 2.0 support for Keras features

Distribution Strategy for hardware

tf.distribute.Strategy supports multiple user segments, including researchers, ML engineers, etc. It also provides good performance and easy switching between strategies. Users can use the tf.distribute.Strategy API to distribute training across multiple GPUs, multiple machines, or TPUs, and can distribute their existing models and training code with minimal code changes. tf.distribute.Strategy can be used with:

TensorFlow's high-level APIs
tf.keras
tf.estimator
Custom training loops

The TensorFlow 2.0 beta also simplifies the API for custom training loops, again based on tf.distribute.Strategy. Custom training loops give flexibility and greater control over training, and they make it easier to debug the model and the training loop.

Model Subclassing

Building a fully customizable model by subclassing tf.keras.Model allows users to define their own forward pass. Layers are created in the __init__ method and set as attributes of the class instance; the forward pass is defined in the call method. Model subclassing is particularly useful when eager execution is enabled, because it allows the forward pass to be written imperatively. It gives greater flexibility when creating models that are not easily expressible otherwise.

Breaking Changes

tf.contrib has been deprecated, and its functionality has been migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely. In the tf.estimator.DNN/Linear/DNNLinearCombined family, the premade estimators have been updated to use tf.keras.optimizers instead of tf.compat.v1.train.Optimizer. A checkpoint converter tool for converting optimizers has also been included with this release.

Bug Fixes and Other Changes

This beta version of 2.0 includes many bug fixes and other changes. Some of them are mentioned below:

In tf.data.Options, the experimental_numa_aware option has been removed, and support for TensorArrays has been added.
tf.keras.estimator.model_to_estimator now supports exporting to the tf.train.Checkpoint format, which allows the saved checkpoints to be compatible with model.load_weights.
tf.contrib.estimator.add_metrics has been replaced with tf.estimator.add_metrics.
A gradient for the SparseToDense op, a GPU implementation of tf.linalg.tridiagonal_solve, and broadcasting support for tf.matmul have been added.
This beta version also exposes a flag that allows the number of threads to vary across Python benchmarks.
The unused StringViewVariantWrapper and tf.string_split have been removed from the v2 API.

The TensorFlow team has set up a TF 2.0 Testing User Group where users can report any snags they hit and give feedback. The general reaction to the release of the TensorFlow 2.0 beta is positive.
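The minimal-code-change claim is easy to see in a sketch. The following is an illustrative example, not code from the announcement; the model shape and layer sizes are invented, and it assumes a machine with zero or more local GPUs:

import tensorflow as tf

# MirroredStrategy replicates the model across all locally visible GPUs;
# with no GPU present, the same code runs unchanged on a single device.
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # Variables created inside the scope are mirrored across replicas.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit(...) then splits each batch across the replicas automatically.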
https://twitter.com/markcartertm/status/1137238238748266496
https://twitter.com/tonypeng_Synced/status/1137128559414087680

A user on Reddit comments, "Can't wait to try that out !" However, some users have compared it to PyTorch, calling PyTorch more comprehensive than TensorFlow: a more powerful platform for research that is also good for production. A user on Hacker News comments, "Maybe I'll give TF another try, but right now I'm really liking PyTorch. With TensorFlow I always felt like my models were buried deep in the machine and it was very hard to inspect and change them, and if I wanted to do something non-standard it was difficult even with Keras. With PyTorch though, I connect things however how I want, write whatever training logic I want, and I feel like my model is right in my hands. It's great for research and proofs-of-concept. Maybe for production too."

Another user says, "Might give it another try, but my latest incursion in the Tensorflow universe did not end pleasantly. I ended up recording everything in Pytorch, took me less than a day to do the stuff that took me more than a week in TF. One problem is that there are too many ways to do the same thing in TF and it's hard to transition from one to the other."

The TensorFlow team hopes to resolve all the remaining issues before the 2.0 release candidate (RC), including complete Keras model support on Cloud TPUs and TPU pods, and to improve the overall performance of 2.0. The RC release is expected sometime this summer.

Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet
ML.NET 1.0 RC releases with support for TensorFlow models and much more!

Haskell is moving to GitLab due to issues with Phabricator

Prasad Ramesh
03 Dec 2018
3 min read
The Haskell functional programming language is moving its development infrastructure from Phabricator to GitLab. Last Saturday, Haskell consultant Ben Gamari laid out some details about the move in a mail.

It started with a proposal to move to GitLab

A few weeks back, Gamari wrote to the Haskell mailing list about moving the Glasgow Haskell Compiler (GHC) development infrastructure to GitLab. The original proposal wasn't complete enough to be used, but it did provide a small test instance to experiment on. The staging URL https://gitlab.staging.haskell.org is ready to use. While this is not the final version of the migration, it does have most of the features a user would expect:

Trac tickets are fully imported, including attachments
Continuous integration (CI) is available via CircleCI
The mirrors of all boot libraries are present
Users can also log in using their GitHub credentials if they choose to

Issues in the migration

There are also a few issues listed by Gamari that need to be worked on:

Timestamps associated with ticket open and close events aren't accurate
Some of the milestone changes have problems on being imported
Currently, CircleCI fails when forked
Trac wiki pages aren't imported as of now

Gamari said that the listed issues have either been resolved in the import tool or are in the process of being resolved. The goal of this staging instance is to let contributors gain experience using GitLab and identify any obstacles to the eventual migration. Developers should note that any comments, merge requests, or issues created on the temporary instance may not be preserved. The focus is on identifying workflows that will become harder under GitLab and ways to improve them, pending issues in importing Trac, and areas that lack documentation.

Why the move to GitLab?

The team did not choose GitHub, as stated by Gamari in another mail: "Its feature set is simply insufficient enough to handle the needs of a larger project like GHC". The move to GitLab is due to a number of reasons:

Phacility, the company that owns Phabricator, has now closed support to non-paying customers
As Phacility now focuses on paying customers, the open-source parts used by GHC seem half finished
The Phabricator tool Harbormaster kept breaking CI
Their surveys indicated developers leaning towards Git rather than the PHP tool Arcanist used by Phabricator

The final migration will happen in about two weeks; the date mentioned is December 18. For more details, you can follow the Haskell mailing list.

What makes functional programming a viable choice for artificial intelligence projects?
GitLab open sources its Web IDE in GitLab 10.7
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub

Golang 1.11 rc1 is here with experimental port for WebAssembly!

Natasha Mathur
17 Aug 2018
3 min read
The Golang team released the Golang 1.11 rc1 version earlier this week. The latest release explores features like WebAssembly (js/wasm), preliminary support for modules, and improved support for debuggers, among others. Golang is currently one of the fastest growing programming languages in the software industry. Golang's easy syntax, concurrency, and fast nature are a few of the reasons for its popularity. It is a modern programming language, created by Google back in 2009 for 21st-century application development. Let's have a look at the key features that come with Golang 1.11 rc1.

WebAssembly (js/wasm)

WebAssembly is different in the sense that it is not processed by a CPU directly; instead, it is an intermediate representation that is compiled to actual machine code by the WebAssembly runtime environment. Go 1.11 rc1 adds an experimental port to WebAssembly (js/wasm). Go programs compile to a single WebAssembly module that includes the Go runtime for goroutine scheduling, garbage collection, maps, etc. Because of this, the resulting size is around 2 MB at minimum, or 500 KB compressed. Go programs can call into JavaScript with the help of the new experimental syscall/js package. With the new GOOS value "js" and GOARCH value "wasm" added, Go files named *_js.go or *_wasm.go will now be ignored by Go tools except when those GOOS/GOARCH values are being used. The GOARCH name "wasm" is the official abbreviation of WebAssembly. The GOOS name "js" was chosen because the host environments that execute WebAssembly bytecode, web browsers and Node.js, both use JavaScript to embed WebAssembly.

Preliminary support for modules

Go 1.11 rc1 offers preliminary support for a new concept called "modules", an alternative to GOPATH with integrated support for versioning and package distribution. With modules, developers are not limited to working inside GOPATH. Also, the version dependency information is explicit yet lightweight, and builds are more reliable.

Improved support for debuggers

The compiler in Go 1.11 rc1 now produces significantly more accurate debug information for optimized binaries, including variable location information, line numbers, and breakpoint locations. This makes it easier to debug binaries compiled without -N -l. There are still a few limitations to the quality of the debug information, which will improve in future releases. DWARF sections are now compressed by default because of the increased size of the debug information produced by the compiler. This is transparent to most ELF tools (like debuggers on Linux and *BSD) and is supported by the Delve debugger on all platforms.

Other changes

Many direct system calls have been removed from the macOS runtime. Go 1.11 binaries are now less likely to break when upgrading macOS versions because system calls are made through the proper channel (libc). Go 1.11 is expected to be released later this month. For more information, check out the official release notes.

Writing test functions in Golang [Tutorial]
How Concurrency and Parallelism works in Golang [Tutorial]
GoMobile: GoLang's Foray into the Mobile World

Clojure 1.10 released with Prepl, improved error reporting and Java compatibility

Amrata Joshi
18 Dec 2018
5 min read
Yesterday, the team at Clojure released Clojure 1.10, a dynamic, general-purpose programming language. Clojure treats code as data and has a Lisp macro system.

What's new in Clojure 1.10?

Java compatibility and dependencies

Java 8 is the minimum requirement for Clojure 1.10. Clojure 1.10 comes with ASM 6.2 and updates its javadoc links. Conditional logic for older Java versions has been removed in this release, and a type hint was added to address a reflection ambiguity in JDK 11. The spec.alpha dependency has been updated to 0.2.176, and the core.specs.alpha dependency has been updated to 0.2.44.

Error printing

In Clojure 1.10, errors are categorized into various phases, such as:

:read-source: An error thrown while reading characters at the REPL or from a source file.
:macro-syntax-check: A syntax error found in the syntax of a macro call, either from the spec or from a macro throwing IllegalArgumentException, IllegalStateException, or ExceptionInfo.
:macroexpansion: All errors thrown during macro evaluation are termed macroexpansion errors.
:compile-syntax-check: A syntax error caught during compilation.
:compilation: A non-syntax error caught during compilation.
:execution: Any error thrown at execution time.
:read-eval-result: An error thrown while reading the result of execution.
:print-eval-result: An error thrown while printing the result of execution.

Protocol extension by metadata

This release comes with a new option, :extend-via-metadata. When :extend-via-metadata is true, values can extend protocols by adding metadata. Protocol implementations are first checked for direct definitions (defrecord, deftype, reify), then for metadata definitions, and then for external extensions (extend, extend-type, extend-protocol).

Tap

Clojure 1.10 comes with tap, a shared, globally accessible system for distributing a series of informational or diagnostic values to a set of handler functions. It can be used as a better debug prn and for facilities like logging. The function tap> sends a value to the set of taps. A tap function may block (e.g. for streams) but will never impede calls to tap>; indefinite blocking may cause tap values to be dropped.

Read string capture mode

This release comes with the read+string function, which not only mimics read but also captures the string that is read. It returns both the read value and the whitespace-trimmed read string. This function requires a LineNumberingPushbackReader.

Prepl (alpha)

Prepl, a new stream-based REPL, comes with structured output suitable for programmatic use. In a prepl, forms are read from the reader, and data maps are returned for the return value (if successful), output to *out* (possibly many), output to *err* (possibly many), or tap> values (possibly many). Other related functions in Clojure 1.10 include io-prepl, a prepl bound to *in* and *out* that works with the Clojure socket server, and remote-prepl, a prepl that can be connected to a remote prepl over a socket.

Datafy and nav

The clojure.datafy namespace features data transformation for objects. The datafy and nav functions can be used to transform and navigate through object graphs. datafy is still in alpha.

Major bug fixes

An ASM regression has been fixed in this release.
Issues with deprecated JDK APIs in the previous release have been fixed.
The invalid bytecode generation for static interface method calls has been fixed.
Redundant key comparisons have been removed from HashCollisionNode.
This release reports the correct line number for an uncaught ExceptionInfo in clojure.test.

Many users have appreciated the efforts the Clojure team put into this release. According to most users, it might prove to be a better foundation for developer tooling, and they are happy with the updated debug messages and bug fixes. One user commented on Hacker News, "From the perspective of a (fairly large-scale at this point) app developer: I find it great that Clojure places such emphasis on backwards compatibility. The language has been designed by experienced and mature people and doesn't go through "let's throw everything out and start again" phases like so many other languages do." Some users, though, would prefer less frequent releases in exchange for better APIs.

Rich Hickey, the creator of the Clojure language, has been widely praised for his efforts, even by non-Clojurists. One user commented on Hacker News, "Rich's writing, presentations, and example of overall conceptual discipline and maturity have helped me focus on the essentials in ways that I could not overstate. I'm glad (but not surprised) to see so much appreciation for him around here, even among non-Clojurists (like myself)."

Though REPLs use the ubiquitous paradigm of stdio streams and are efficient, their major downside is mingling evaluation output with printing output (the "Print" step). A few users are comparing this work with unrepl and are unsure whether the design works for their projects. Clojure is stable, and it doesn't have a static type system; some users are also not happy with the changelog. This release has raised a few questions in the developer community, one of the big ones being whether prepls will replace remote APIs someday. It would be interesting to see if that really happens with the next set of Clojure releases. Get more information about Clojure 1.10 on the Clojure site.

ClojureCUDA 0.6.0 now supports CUDA 10
Clojure 1.10.0-beta1 is out!
Clojure for Domain-specific Languages – Design Concepts with Clojure

PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility

Amrata Joshi
11 Jun 2019
3 min read
Yesterday, the team at PyTorch announced the availability of PyTorch Hub, a simple API and workflow that offers the basic building blocks for improving machine learning research reproducibility. Reproducibility is an essential requirement in many fields of research, including those based on machine learning techniques, yet most machine learning research publications are either not reproducible or are too difficult to reproduce. With the growing number of research publications, tens of thousands of papers hosted on arXiv, and countless conference submissions, research reproducibility has become even more important. Though many publications are accompanied by code and trained models, users are still left to figure out many of the steps themselves.

PyTorch Hub consists of a pre-trained model repository designed to facilitate research reproducibility and enable new research. It provides built-in support for Colab, integrates with Papers With Code, and contains a set of models covering classification and segmentation, transformers, generative models, etc. By adding a simple hubconf.py file, a GitHub repository can publish pre-trained models; the file enumerates the models to be supported and lists the dependencies required to run them. For examples, one can check out the torchvision, huggingface-bert, and gan-model-zoo repositories.

Consider the case of torchvision's hubconf.py: in the torchvision repository, each model file can function and be executed independently. These model files don't require any package except PyTorch, and they don't need separate entry-points. A hubconf.py helps users send a pull request based on the template mentioned on the GitHub page. The official blog post reads, "Our goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility. Hence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published. Once we accept your pull request, your model will soon appear on Pytorch hub webpage for all users to explore."

PyTorch Hub allows users to explore available models, load a model, and understand the kinds of methods available for any given model. A few examples (a short sketch of both sides of the workflow appears at the end of this article):

Explore available entrypoints: With the help of the torch.hub.list() API, users can now list all available entrypoints in a repo. PyTorch Hub also allows auxiliary entrypoints apart from pre-trained models, such as bertTokenizer for preprocessing in the BERT models, making the user workflow smoother.

Load a model: With the help of the torch.hub.load() API, users can load a model entrypoint. This API can also provide useful information about instantiating the model.

Most users are happy about this news, as they think it will be useful to them. A user commented on Hacker News, "I love that the tooling for ML experimentation is becoming more mature. Keeping track of hyperparameters, training/validation/test experiment test set manifests, code state, etc is both extremely crucial and extremely necessary." Another user commented, "This will also make things easier for people writing algorithms on top of one of the base models." To know more about this news, check out PyTorch's blog post.
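As referenced above, here is a short sketch of both sides of the workflow. The consumer half uses the real pytorch/vision repository and its resnet18 entrypoint from the torchvision example; the publisher half only shows the general shape of a hubconf.py, with a hypothetical my_model entrypoint invented for illustration:

# hubconf.py (publisher side): what a repository exposes to PyTorch Hub.
dependencies = ['torch']  # packages a consumer must have installed

def my_model(pretrained=False, **kwargs):
    # Hypothetical entrypoint: build and return an nn.Module. A real repo
    # would construct its architecture here and, when pretrained=True, load
    # trained weights (e.g. with torch.hub.load_state_dict_from_url).
    import torch.nn as nn
    return nn.Linear(10, 2)

# Consumer side: the torch.hub calls described above.
import torch

print(torch.hub.list('pytorch/vision'))              # list entrypoints in the repo
print(torch.hub.help('pytorch/vision', 'resnet18'))  # show the entrypoint docstring
model = torch.hub.load('pytorch/vision', 'resnet18', pretrained=True)
model.eval()  # the pre-trained model is ready for scoring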
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
F8 PyTorch announcements: PyTorch 1.1 releases with new AI tools, open sourcing BoTorch and Ax, and more
Horovod: an open-source distributed training framework by Uber for TensorFlow, Keras, PyTorch, and MXNet

C# 8.0 to have async streams, recursive patterns and more

Prasad Ramesh
14 Nov 2018
4 min read
C# 8.0 will introduce some new features and will likely ship at the same time as .NET Core 3.0. Developers will be able to use the new features with Visual Studio 2019.

Nullable reference types in C# 8.0

This feature aims to help prevent the null reference exceptions that appear everywhere; they have riddled object-oriented programming for half a century. Nullable reference types stop developers from using null in ordinary reference types like string by making these types non-nullable. These are warnings, not errors, and existing code will produce new warnings, so developers will have to opt into the feature at the project, file, or source-line level. C# 8.0 will let you express your "nullable intent" and throws a warning when you don't follow it:

string s = null; // Warning: Assignment of null to non-nullable reference type
string? s = null; // Ok

Asynchronous streams with IAsyncEnumerable<T>

The async feature from C# 5.0 lets developers consume and produce asynchronous results in straightforward code, without callbacks. This isn't helpful when developers want to consume or produce continuous streams of results, for example, data from an IoT device or a cloud service. Async streams exist for this use. C# 8.0 will come with IAsyncEnumerable<T>, an asynchronous version of the existing IEnumerable<T>. Now you can await foreach over such streams to consume their elements, and yield return in them to produce elements:

async IAsyncEnumerable<int> GetBigResultsAsync()
{
    await foreach (var result in GetResultsAsync())
    {
        if (result > 20) yield return result;
    }
}

Ranges and indices

A type Index is added, which can be used for indexing. An Index can be created from an int that counts from the beginning, or alternatively with the prefix ^ operator, counting from the end:

Index i1 = 3;  // number 3 from beginning
Index i2 = ^4; // number 4 from end
int[] a = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
Console.WriteLine($"{a[i1]}, {a[i2]}"); // "3, 6"

C# 8.0 will also have a Range type consisting of two Indexes, one for the start and one for the end. Ranges can be written with an x..y range expression.

Default implementations of interface members

Currently, once an interface is published, members can't be added anymore without breaking all its existing implementers. With the new release, a body for an interface member can be provided. If somebody doesn't implement that member, the default implementation will be used instead.

Allowing recursive patterns

C# 8.0 will allow patterns to contain other patterns:

IEnumerable<string> GetEnrollees()
{
    foreach (var p in People)
    {
        if (p is Student { Graduated: false, Name: string name }) yield return name;
    }
}

The pattern in the above code checks that the Person is a Student. It then applies the constant pattern false to their Graduated property to see if they're still enrolled, and the pattern string name to their Name property to get their name. Hence, if p is a Student who has not graduated and has a non-null name, that name is yield returned.

Switch expressions

Switch statements with patterns were a powerful feature in C# 7.0, but since they can be cumbersome to write, the next C# version will have switch expressions: a lightweight version of switch statements where all the cases are expressions.

Target-typed new-expressions

In many cases, when creating a new object, the type is already given by the context. C# 8.0 will let you omit the type in those cases. For more details, visit the Microsoft Blog.

ReSharper 18.2 brings performance improvements, C# 7.3, Blazor support and spellcheck
Qml.Net: A new C# library for cross-platform .NET GUI development
Microsoft announces .NET standard 2.1

GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage

Natasha Mathur
31 Oct 2018
7 min read
Yesterday, GitHub posted the root-cause analysis of its outage that took place on 21st October. The outage started at 23:00 UTC on 21st October and left the site broken until 23:00 UTC on 22nd October. Although the backend Git services were up and running during the outage, multiple internal systems were affected. Users were unable to log in, submit Gists, or file bug reports; outdated files were being served, branches went missing, and so forth. Moreover, GitHub couldn't serve webhook events or build and publish GitHub Pages sites.

"At 22:52 UTC on October 21, routine maintenance work to replace failing 100G optical equipment resulted in the loss of connectivity between our US East Coast network hub and our primary US East Coast data center. Connectivity between these locations was restored in 43 seconds, but this brief outage triggered a chain of events that led to 24 hours and 11 minutes of service degradation," mentioned the GitHub team.

GitHub uses MySQL to store GitHub metadata and operates multiple MySQL clusters of different sizes. Each cluster consists of up to dozens of read replicas that help GitHub store non-Git metadata. These clusters are how GitHub's applications are able to provide pull requests and issues, manage authentication, coordinate background processing, and serve additional functionality beyond raw Git object storage. For improved performance, GitHub applications direct writes to the relevant primary for each cluster but delegate read requests to a subset of replica servers. Orchestrator is used to manage GitHub's MySQL cluster topologies and handles automated failover. Orchestrator considers a number of factors during this process and is built on top of Raft for consensus. In some cases, Orchestrator can implement topologies that the applications are unable to support, which is why it is very crucial to align the Orchestrator configuration with application-level expectations.

Here's a timeline of the events that took place on 21st October leading to the outage.

22:52 UTC, 21 Oct: Orchestrator began a process of leadership deselection as per the Raft consensus. After Orchestrator reorganized the US West Coast database cluster topologies and connectivity was restored, write traffic was directed to the new primaries in the West Coast site. The database servers in the US East Coast data center, however, contained writes that had not been replicated to the US West Coast facility; because of this, the database clusters in both data centers included writes that were not present in the other. This is why the GitHub team was unable to safely fail over (a procedure by which a system automatically transfers control to a duplicate system on detecting failure) the primaries back to the US East Coast data center.

22:54 UTC: GitHub's internal monitoring systems began to generate alerts indicating that the systems were undergoing numerous faults. By 23:02 UTC, GitHub engineers had found that the topologies for numerous database clusters were in an unexpected state, and the Orchestrator API displayed a database replication topology that included only servers from the US West Coast data center.

23:07 UTC: The responding team manually locked the deployment tooling to prevent any additional changes from being introduced. At 23:09 UTC, the site was placed into yellow status; at 23:11 UTC, the incident coordinator changed the site status to red.

23:13 UTC: As the issue had affected multiple clusters, additional engineers from GitHub's database engineering team started investigating the current state, to determine the actions needed to manually configure a US East Coast database as the primary for each cluster and rebuild the replication topology. This was quite tough, as the West Coast database cluster had by then ingested writes from GitHub's application tier for nearly 40 minutes. To keep user data safe, engineers decided that the 30+ minutes of data written to the US West Coast data center had to be preserved, which ruled out options other than failing forward. So they further extended the outage to ensure the consistency of users' data.

23:19 UTC: After querying the state of the database clusters, GitHub stopped running jobs that write metadata about things such as pushes. This led to partially degraded site usability, as webhook delivery and GitHub Pages builds were paused. "Our strategy was to prioritize data integrity over site usability and time to recovery," as per the GitHub team.

00:05 UTC, 22 Oct: Engineers started resolving data inconsistencies and implementing failover procedures for MySQL. The recovery plan included failing forward, synchronization, falling back, then churning through backlogs before returning to green. The time needed to restore multiple terabytes of backup data caused the process to take hours: decompressing, checksumming, preparing, and loading large backup files onto newly provisioned MySQL servers took a lot of time.

00:41 UTC: A backup process started for all affected MySQL clusters, and multiple teams of engineers began investigating ways to speed up the transfer and recovery time.

06:51 UTC: Several clusters completed restoration from backups in the US East Coast data center and started replicating new data from the West Coast. This resulted in slow load times for pages executing a write operation over a cross-country link. The GitHub team identified ways to restore directly from the West Coast to overcome the throughput restrictions caused by downloading from off-site storage, and updated the status page to set an expectation of two hours as the estimated recovery time.

07:46 UTC: GitHub published a blog post with more information. "We apologize for the delay. We intended to send this communication out much sooner and will be ensuring we can publish updates in the future under these constraints," said the GitHub team.

11:12 UTC: All database primaries were established in the US East Coast again. This made the site far more responsive, as writes were now directed to database servers located in the same physical data center as GitHub's application tier. While performance improved substantially, there were dozens of database read replicas that lagged behind the primary; these delayed replicas caused users to experience inconsistent data on GitHub.

13:15 UTC: GitHub.com started to experience peak traffic load, so engineers brought into service the additional MySQL read replicas that had been provisioned in the US East Coast public cloud earlier in the incident.

16:24 UTC: Once the replicas were in sync, a failover to the original topology was conducted, addressing the immediate latency and availability concerns. The service status was kept red while GitHub began processing the accumulated backlog of data; this was done to prioritize data integrity.

16:45 UTC: At this point, engineers had to balance the increased load represented by the backlog against the risk of overloading GitHub's ecosystem partners with notifications. There were over five million hook events along with 80 thousand Pages builds queued. "As we re-enabled processing of this data, we processed ~200,000 webhook payloads that had outlived an internal TTL and were dropped. Upon discovering this, we paused that processing and pushed a change to increase that TTL for the time being," mentions the GitHub team. To avoid degrading the reliability of its status updates, GitHub remained in degraded status until the entire backlog of data had been processed.

23:03 UTC: All pending webhooks and Pages builds had been processed, and the integrity and proper operation of all systems had been confirmed. The site status was updated to green.

Apart from this, GitHub has identified a number of technical initiatives and continues to work through an extensive post-incident analysis process internally. "All of us at GitHub would like to sincerely apologize for the impact this caused to each and every one of you. We're aware of the trust you place in GitHub and take pride in building resilient systems that enable our platform to remain highly available. With this incident, we failed you, and we are deeply sorry," said the GitHub team. For more information, check out the official GitHub blog post.

Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
GitHub is bringing back Game Off, its sixth annual game building competition, in November
GitHub comes to your code Editor; GitHub security alerts now have machine intelligence

Fedora Workstation 31 to come with Wayland support, improved core features of PipeWire, and more

Bhagyashree R
26 Jun 2019
3 min read
On Monday, Christian F.K. Schaller, Senior Manager for Desktop at Red Hat, shared a blog post outlining the various improvements and features coming in Fedora Workstation 31. These include Wayland improvements, more PipeWire functionality, and continued improvements around Flatpak, Fleet Commander, and more. Here are some of the enhancements coming to Fedora Workstation 31:

Wayland transition to complete soon

Wayland is a desktop server protocol introduced to replace the X Window System with a modern and simpler windowing system in Linux and other Unix-like operating systems. The team is focusing on removing the X Window System dependency so that the GNOME Shell will be able to run without the need for XWayland. Schaller shared that the work related to removing the X dependency is done for the shell itself; however, some things remain in regard to the GNOME Settings daemon. Once this work is complete, an X server (XWayland) will only start if an X application is run and will shut down when the application is stopped.

Another aspect the team is working on is allowing X applications to run as root under XWayland. Running desktop applications as root is generally not considered safe; however, a few applications only work when run as root, which is why the team has decided to continue supporting running applications as root in XWayland. The team is also adding support for the NVidia binary driver to allow running a native Wayland session on top of it.

PipeWire with improved desktop sharing portal

PipeWire is a multimedia framework that aims to improve the handling of audio and video in Linux. This release will come with further improvements to PipeWire's core features. The existing desktop sharing portal has been enhanced and will soon have Miracast support. The team's ultimate goal is to make the GNOME integration even more seamless than a standalone app could be.

Better infrastructure for building Flatpaks

Flatpak is a utility for software deployment and package management in Linux. The team is improving the infrastructure for building Flatpaks from RPMs. They will also be offering applications from flathub.io and quay.io out of the box, in accordance with Fedora rules for third-party software. The team will additionally make a Red Hat UBI-based runtime available; a third-party developer can use this runtime to build their applications and be sure it will be supported by Red Hat for the lifetime of a given RHEL release.

Fedora Toolbox with improved GNOME Terminal

Fedora Toolbox is a tool that gives developers a seamless experience when using an immutable OS like Silverblue. Improvements are currently being made to GNOME Terminal to ensure more natural behavior inside the terminal when interacting with pet containers. The team is also looking for ways to make the selection of containers more discoverable, so that developers can easily get access to, for instance, a Red Hat UBI container or a Red Hat TensorFlow container.

Along with these, the team is improving the infrastructure for Linux fingerprint reader support, securing GameMode, adding support for the Dell Totem, improving media codec support, and more. To know more in detail, check out Schaller's blog post.

Fedora 30 releases with GCC 9.0, GNOME 3.32, performance improvements, and much more!
Fedora 30 Beta released with desktop environment options, GNOME 3.32, and much more
Fedora 31 will now come with Mono 5 to offer open-source .NET support

Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers

Vincy Davis
22 Aug 2019
2 min read
Yesterday, the Qt team introduced a new graphics toolkit called Qt for MCUs for creating fluid user interfaces (UIs) on cost-effective microcontrollers (MCUs). The toolkit will enable new and existing users to take advantage of the existing Qt tools and libraries used for device creation, enabling companies to provide a better user experience.

Petteri Holländer, Senior Vice President of Product Management at Qt, said, "With the introduction of Qt for MCUs, customers can now use Qt for almost any software project they're working on, regardless of target – with the added convenience of using just one technology framework and toolset." He further adds, "This means that both existing and new Qt customers can pursue the many business growth opportunities offered by connected devices – across a wide and diverse range of industries."

Qt for MCUs utilizes the Qt Modeling Language (QML) and Qt's developer and designer tools for constructing fast, customized Qt applications. "With the frontend defined in declarative QML and the business logic implemented in C/C++, the end result is a fluid graphical UI application running on microcontrollers," says the Qt team.

Key benefits offered by Qt for MCUs:

Existing skill sets can be reused for Qt on microcontrollers
The same technology can be used in high-end and mass-market devices, yielding low maintenance costs
No compromise on graphics performance, hence reduced hardware costs
Users can upgrade from a legacy solution to the cross-platform graphical toolkit

Check out the Qt for MCUs website for more information.

Qt and LG Electronics partner to make webOS as the platform of choice for embedded smart devices
Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
Qt Creator 4.9.0 released with language support, QML support, profiling and much more

.NET announcements: Preview 2 of .NET Core 2.2 and Entity Framework Core 2.2, C# 7.3, and ML.NET 0.5

Savia Lobo
13 Sep 2018
5 min read
Yesterday, the .NET community announced the second previews of .NET Core 2.2 and Entity Framework Core 2.2, along with C# 7.3 and ML.NET 0.5. Let's have a look at the highlights and features of each of these announcements.

.NET Core 2.2 Preview 2

.NET Core 2.2 Preview 2 can be used with Visual Studio 15.8, Visual Studio for Mac, and Visual Studio Code. Following are two highlights of this release.

Tiered compilation enabled

Tiered compilation is now enabled by default. It was available as part of the .NET Core 2.1 release, but at that time one had to enable it via application configuration or an environment variable. It is now on by default and can be disabled as needed. In the chart accompanying the announcement, the baseline is .NET Core 2.1 RTM running in a default configuration with tiered compilation disabled; the second scenario has tiered compilation enabled and shows a significant requests-per-second (RPS) throughput benefit. The numbers in the chart are scaled so that the baseline always measures 1.0, which makes it easy to read performance changes as a percentage. The first two tests are TechEmpower benchmarks, and the last one is Music Store, a frequently used sample ASP.NET app.

Platform support

.NET Core 2.2 is supported on the following operating systems:

Windows Client: 7, 8.1, 10 (1607+)
Windows Server: 2008 R2 SP1+
macOS: 10.12+
RHEL: 6+
Fedora: 27+
Ubuntu: 14.04+
Debian: 8+
SLES: 12+
openSUSE: 42.3+
Alpine: 3.7+

Read about .NET Core 2.2 Preview 2 in detail on the Microsoft blog.

Entity Framework Core 2.2 Preview 2

This preview includes a large number of bug fixes and two additional important previews: a data provider for Cosmos DB, and new spatial extensions for the SQL Server and in-memory providers.

New EF Core provider for Cosmos DB

This new provider enables developers familiar with the EF programming model to easily target Azure Cosmos DB as an application database, with its global distribution, elastic scalability, "always on" availability, very low latency, and automatic indexing.

Spatial extensions for SQL Server and in-memory

This implementation picks the NetTopologySuite library, which the PostgreSQL provider uses, as the source of spatial .NET types. NetTopologySuite is a database-agnostic spatial library that implements standard spatial functionality using .NET idioms like properties and indexers. The extension adds the ability to map and convert instances of these types to the column types supported by the underlying database, and to translate methods defined on these types in LINQ queries to SQL functions supported by the underlying database.

Read more about Entity Framework Core 2.2 Preview 2 on the Microsoft blog.

C# 7.3

C# 7.3 is the newest point release in the 7.0 family. Along with new compiler options, there are two main themes to the C# 7.3 release: features that enable safe code to be as performant as unsafe code, and incremental improvements to existing features.

New features that support better performance for safe code:

Access to fixed fields without pinning
Ref local variables may be reassigned
Initializers may be used on stackalloc arrays
fixed statements may be used with any type that supports a pattern
Additional generic constraints may be used

The new compiler options in C# 7.3 are:

-publicsign, to enable Open Source Software (OSS) signing of assemblies
-pathmap, to provide a mapping for source directories

Read more about C# 7.3 in detail in its documentation notes.

ML.NET 0.5

The .NET community released ML.NET version 0.5. ML.NET is a cross-platform, open source machine learning framework for .NET developers. This release includes two key highlights.

Addition of a TensorFlow model scoring transform (TensorFlowTransform)

Starting from this version, the community plans to add support for deep learning in ML.NET. To that end, they introduced TensorFlowTransform, which enables taking an existing TensorFlow model, either trained by the user or downloaded from somewhere else, and getting scores from the TensorFlow model in ML.NET. This new TensorFlow scoring capability doesn't require a working knowledge of TensorFlow's internal details. The implementation of this transform is based on code from TensorFlowSharp. One can simply add a reference to the ML.NET NuGet packages in .NET Core or .NET Framework apps; under the covers, ML.NET includes and references the native TensorFlow library, which allows writing code that loads an existing trained TensorFlow model file for scoring.

New ML.NET API proposal exploration

The new ML.NET API offers more flexible capabilities than the current LearningPipeline API, which will be deprecated when the new API is ready. The new API offers attractive features that aren't possible with the current LearningPipeline API:

Strongly-typed API: takes advantage of C# capabilities, so errors are discovered at compilation time, along with improved IntelliSense in editors.
Better flexibility: provides a decomposable train and predict process, eliminating rigid and linear pipeline execution.
Improved usability: direct calls to the APIs from user code, with no more scaffolding or isolation layer creating an obscure separation between what the developer writes and the internal APIs. Entrypoints are no longer mandatory.
Ability to simply score with TensorFlow models: one can load a TensorFlow model and score with it without adding any additional learner or training process.
Better visibility of the transformed data: users have better visibility of the data while applying transformers.

As this API will be a significant change to ML.NET, the community has started an open discussion where users can provide feedback and help shape the long-term API for ML.NET. Users can share their feedback on the ML.NET GitHub repo. Read more about ML.NET 0.5 in detail on the Microsoft blog.

Task parallel library for easy multi-threading in .NET Core [Tutorial]
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]
Microsoft's .NET Core 2.1 now powers Bing.com
Task parallel library for easy multi-threading in .NET Core [Tutorial]
Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]
Microsoft's .NET Core 2.1 now powers Bing.com
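Returning to the EF Core spatial extensions described earlier, here is a minimal sketch of mapping and querying NetTopologySuite types. The entity, connection string, and the UseNetTopologySuite option name reflect how the feature shipped in EF Core 2.2; the preview packages may differ in detail.

using System.Linq;
using Microsoft.EntityFrameworkCore;
using NetTopologySuite.Geometries;

public class Cafe
{
    public int Id { get; set; }
    public string Name { get; set; }
    // An NTS type, mapped by the provider to a SQL Server spatial column.
    public Point Location { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Cafe> Cafes { get; set; }

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseSqlServer(
            "Server=.;Database=Shops;Trusted_Connection=True",
            sql => sql.UseNetTopologySuite()); // opt in to the spatial extension

    // Methods on NTS types in LINQ queries translate to SQL spatial functions.
    public Cafe Nearest(Point me)
        => Cafes.OrderBy(c => c.Location.Distance(me)).First();
}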
.NET Core completes move to the new compiler - RyuJIT

Richa Tripathi
27 Jun 2018
2 min read
The .NET team has announced that it has completely moved the .NET Core platform to RyuJIT, the just-in-time (JIT) compiler written in-house by Microsoft. The team had long been working on this shift to make compilation faster for .NET Core applications, given that web applications today can take noticeable time to start up.

A JIT compiler is a program that converts the intermediate instructions produced by .NET Core into native machine code so that they can be executed by the processor. JIT compilers have become a standard way to support compilation across various platforms; they are an improvement over traditional compilers, which require programs to be recompiled for each different computer system.

RyuJIT was developed by the .NET Core team as the next-generation 64-bit JIT compiler, designed to compile programs twice as fast as its predecessor. .NET Core applications compiled with this JIT are recorded to have around 30% faster start-up times, and the code RyuJIT produces runs efficiently on servers. An important factor behind the performance gains was basing RyuJIT on the x64 architecture, shifting away from the older x86 codebase.

One of the major stability benefits of this move is that .NET programs will behave consistently across architectures, bringing compatibility for .NET programs across platforms such as ARM and mobile, among others. This will help developers maintain a single codebase that compiles with both 64-bit and 32-bit compilers and performs well on both types of systems.

The .NET team has promised the stability of the platform after this move and expects performance to keep improving. The team is inviting developers to join the community and has put the documentation for RyuJIT on its GitHub repository.

Applying Single Responsibility principle from SOLID in .NET Core
Microsoft Open Sources ML.NET, a cross-platform machine learning framework
What is ASP.NET Core?

LLVM 7.0.0 released with improved optimization and new tools for monitoring

Prasad Ramesh
20 Sep 2018
3 min read
LLVM is a collection of tools used to develop compiler front ends and back ends. LLVM 7.0.0 has now been released with improved optimization and new tools for performance measurement and analysis.

The Windows installer in LLVM 7.0.0 no longer includes a Visual Studio integration. Instead, there is a new LLVM Compiler Toolchain Visual Studio extension on the Visual Studio Marketplace. This new integration method supports Visual Studio 2017. The libraries have been renamed from 7.0 to 7; note that this change also impacts downstream libraries like lldb. The LoopInstSimplify pass (-loop-instsimplify) has been removed in this release, and when using the Windows x or w IR mangling schemes, symbols starting with ? are no longer mangled by LLVM.

A new tool called llvm-exegesis has been added. It automatically measures instruction scheduling properties and provides a principled way to edit scheduling models. Another new tool, llvm-mca, is a static performance analysis tool that uses information available in LLVM, such as scheduling models, to statically predict the performance of machine code for a specific CPU.

The optimization of floating-point casts has also been improved. It provides optimized results for code that relies on the undefined behavior of overflowing casts. The optimization is on by default and can be disabled by specifying a function attribute:

"strict-float-cast-overflow"="false"

This attribute can be created by the clang option -fno-strict-float-cast-overflow. Code sanitizers can be used to detect affected patterns; the clang option for detecting only this problem is -fsanitize=float-cast-overflow. A demonstration is as follows:

#include <stdio.h>

int main() {
  float x = 4294967296.0f;
  x = (float)((int)x);   /* the overflowing float-to-int cast is undefined behavior */
  printf("junk in the ftrunc: %f\n", x);
  return 0;
}

And it is run with the clang option:

clang -O1 ftrunc.c -fsanitize=float-cast-overflow ; ./a.out
ftrunc.c:5:15: runtime error: 4.29497e+09 is outside the range of representable values of type 'int'
junk in the ftrunc: 0.000000

LLVM_ON_WIN32 is no longer set by llvm/Config/config.h and llvm/Config/llvm-config.h. If you have used this macro before, now use the compiler-set _WIN32 instead, which is set exactly when LLVM_ON_WIN32 used to be set. The DEBUG macro has been renamed to LLVM_DEBUG, but the interface remains the same.

SmallVector<T, 0> has shrunk from sizeof(void*) * 4 + sizeof(T) to sizeof(void*) + sizeof(unsigned) * 2, making it smaller than std::vector<T> on 64-bit platforms. Its maximum capacity is now restricted to UINT32_MAX.

Experimental support has been added for DWARF v5 debugging, including the new .debug_names accelerator table. The opt tool supports the -load-pass-plugin option to load pass plugins for the new PassManager, and support has been added for profiling JIT-ed code with perf.

In LLVM 7.0.0, support for the .rva assembler directive for COFF targets has been added. For Windows, the llvm-rc tool has also received minor upgrades; there are still some known missing features, but it should be usable in most cases. On request, CodeView debug info can now be emitted for MinGW configurations. There are also changes to a variety of targets such as AArch64, ARM, and x86, among others.

For a complete list of updates, visit the LLVM website.

JUnit 5.3 brings console output capture, assertThrow enhancements and parallel test execution
Mastodon 2.5 released with UI, administration, and deployment changes
ReSharper 18.2 brings performance improvements, C# 7.3, Blazor support and spellcheck