
Tech News - Programming

573 Articles

Julia announces the preview of multi-threaded task parallelism in alpha release v1.3.0

Vincy Davis
24 Jul 2019
5 min read
Yesterday, the Julia team announced the alpha release of v1.3.0, an early preview of Julia 1.3.0 that is expected to be out in a couple of months. The alpha release includes a preview of a new threading interface for Julia programs: multi-threaded task parallelism. In this model, any piece of a program can be marked for parallel execution, and a 'task' is started to run that code automatically on an available thread. Much as garbage collection frees users from worrying about memory management, this lets users freely launch millions of tasks without worrying about how the libraries they call are implemented. The model is portable and works across all Julia packages.

Read Also: Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]

Jeff Bezanson and Jameson Nash from Julia Computing, and Kiran Pamnany from Intel, say Julia's task parallelism is "inspired by parallel programming systems like Cilk, Intel Threading Building Blocks (TBB) and Go". With multi-threaded task parallelism, Julia can schedule many parallel tasks that call library functions without overcrowding the CPUs with threads. This is an important property for high-level languages, which call library functions frequently.

How to resolve challenges while implementing task parallelism

Allocating and switching task stacks
Each task requires its own execution stack, distinct from the usual process or thread stacks provided by Unix operating systems. Julia has an alternate implementation of stack switching that trades time for memory when a task switches; however, it may not be compatible with foreign code that uses cfunction. This implementation is used when stacks would otherwise consume large amounts of address space.

Event loop thread issues an async signal
If a thread needs the event loop thread to wake up, it issues an async signal. This may happen because another thread has scheduled new work, a thread is beginning to run garbage collection, or a thread wants to take the I/O lock to perform I/O.

Task migration across system threads
In general, a task may start running on one thread, block for a while, and then restart on another. However, Julia uses thread-local state internally every time memory is allocated, so currently a task always runs on the thread it started on. To support this, Julia uses the concept of a sticky task, which must run on a given thread, along with per-thread queues for the running tasks associated with each thread.

Sleeping idle threads
To avoid keeping CPUs at 100% usage all the time, idle threads are put to sleep. This can lead to a synchronization problem, as some threads might be scheduled new work while others are asleep.

Dedicated scheduler tasks cause overhead
When a task blocks, the scheduler is called to pick another task to run. But on what stack does that code run? It is possible to have a dedicated scheduler task, but there is less overhead if the scheduler code runs in the context of the recently blocked task. One suggested measure is to try to pull the next task out of the scheduler queue before switching away.

Classic bugs
The Julia team faced many difficult bugs while implementing the multi-threaded functionality. One of them was a mysterious bug on Windows that was eventually fixed by flipping a single bit.
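The pattern at the heart of this work — spawn many lightweight tasks and let the runtime map them onto a fixed set of OS threads — can be loosely illustrated with the Python sketch below. This is an analogy, not Julia's actual interface (in Julia 1.3 the entry point is its own threading macro, Threads.@spawn); the fib workload and task count are arbitrary placeholders.

```python
# Illustrative analogy only: a Python thread pool standing in for Julia's task
# scheduler. The point is the "spawn many small tasks, let the runtime schedule
# them onto available threads" pattern described in the article.
import os
from concurrent.futures import ThreadPoolExecutor

def fib(n: int) -> int:
    """Placeholder workload; any function could be scheduled as a task."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def main() -> None:
    # One worker per logical processor, mirroring "threads equal to the CPUs".
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        # Launch many small tasks without caring which thread runs each one.
        futures = [pool.submit(fib, 20) for _ in range(1_000)]
        results = [f.result() for f in futures]
    print(f"computed {len(results)} tasks")

if __name__ == "__main__":
    main()
```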
Future goals for Julia version 1.3.0
- Performance work on task switching and I/O latency
- Allow task migration
- Use multiple threads in the compiler
- Improved debugging tools
- Provide alternate schedulers

Developers are impressed with the new multithreaded parallelism functionality. A user on Hacker News comments, "Great to see this finally land - thanks for all the team's work. Looking forward to giving it a whirl. Threading is something of a prerequisite for acceptance as a serious language among many folks. So great to not just check that box, but to stick the pen right through it. The devil is always in the details, but from the doc the interface looks pretty nice."

Another user says, "This is huge! I was testing out the master branch a few days ago and the parallelism improvements were amazing."

Many users expect Julia to challenge Python in the future. A comment on Hacker News reads, "Not only is this huge for Julia, but they've just thrown down the gauntlet. The status quo has been upset. I expect Julia to start eating everyone's lunch starting with Python. Every language can use good concurrency & parallelism support and this is the biggest news for all dynamic languages." Another user says, "I worked in a computational biophysics department with lots of python/bash/R and I was the only one who wrote lots of high-performance code in Julia. People were curious about the language but it was still very much unknown. Hope we will see a broader adoption of Julia in the future - it's just that it is much better for the stuff we do on a daily basis."

To learn how to implement task parallelism in Julia, head over to the Julia blog.

- Mozilla is funding a project for bringing Julia to Firefox and the general browser environment
- Announcing Julia v1.1 with better exception handling and other improvements
- Julia for machine learning. Will the new language pick up pace?

GNU community announces ‘Parallel GCC’ for parallelism in real-world compilers

Savia Lobo
16 Sep 2019
5 min read
Yesterday, the team behind the GNU project announced 'Parallel GCC', a research project aiming to parallelize a real-world compiler. Parallel GCC is useful on machines with many cores, where GNU Make alone cannot provide enough parallelism. A parallel GCC can also be used to design a parallel compiler from scratch.

GCC is an optimizing compiler that automatically optimizes code when compiling. Its optimization phase involves three steps:
- Inter Procedural Analysis (IPA): builds a callgraph and uses it to decide how to perform optimizations.
- GIMPLE Intra Procedural Optimizations: performs several hardware-independent optimizations inside each function.
- RTL Intra Procedural Optimizations: performs several hardware-dependent optimizations inside each function.

IPA collects information and decides how to optimize all functions, then sends each function to the GIMPLE optimizer, which in turn sends it to the RTL optimizer, after which the final code is generated. This process repeats for every function in the code.

Also Read: Oracle introduces patch series to add eBPF support for GCC

Why a Parallel GCC?
The team designed the parallel architecture to increase parallelism and reduce overhead. Once IPA finishes its analysis, a number of threads equal to the number of logical processors are spawned to avoid scheduling overhead. One of those threads inserts all analyzed functions into a threadsafe producer-consumer queue, which all threads consume. Once a thread has finished processing one function, it queries the queue for the next available function, until it finds an EMPTY token. When that happens, the thread finalizes, as there are no more functions to be processed. This architecture is used to parallelize the per-function GIMPLE Intra Procedural Optimizations and can be easily extended to also support the RTL Intra Procedural Optimizations. It does not, however, cover the IPA passes or the per-language front-end analysis.

Code refactoring to achieve Parallel GCC
The team refactored several parts of the GCC middle-end code in the Parallel GCC project, and says there are still many places where code refactoring is necessary for the project to succeed. "The original code required a single function to be optimized and outputted from GIMPLE to RTL without any possible change of what function is being compiled," the researchers wrote in their official blog. Several structures in GCC were made per-thread or threadsafe, either by replicating them using the C11 thread notation, by allocating the data structure on the thread stack, or simply by inserting locks. "One of the most tedious parts of the job was detecting making several global variables threadsafe, and they were the cause of most crashes in this project. Tools made for detecting data-races, such as Helgrind and DRD, were useful in the beginning but then showed its limitations as the project advanced. Several race conditions had a small window and did not happen when the compiler ran inside these tools. Therefore there is a need for better tools to help to find global variables or race conditions," the blog mentions.
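The producer-consumer scheme described under "Why a Parallel GCC?" — worker threads draining a threadsafe queue of analyzed functions until they hit an EMPTY token — can be sketched in a few lines. The sketch below is a minimal Python illustration of the pattern, not GCC's internal C implementation; optimize_function and the function names are placeholders.

```python
# Minimal sketch of the described scheme: a producer fills a threadsafe queue
# with analyzed functions, workers consume them until they see a sentinel
# ("EMPTY") token. Python stands in for GCC's internal C code here.
import os
import queue
import threading

EMPTY = object()  # sentinel marking the end of the work stream

def optimize_function(fn_name: str) -> None:
    # Placeholder for the per-function GIMPLE optimization pass.
    pass

def worker(q: queue.Queue) -> None:
    while True:
        item = q.get()
        if item is EMPTY:
            q.put(EMPTY)        # let the other workers see the sentinel too
            break
        optimize_function(item)

def compile_all(functions: list) -> None:
    q: queue.Queue = queue.Queue()
    n_threads = os.cpu_count() or 1          # one worker per logical processor
    workers = [threading.Thread(target=worker, args=(q,)) for _ in range(n_threads)]
    for t in workers:
        t.start()
    for fn in functions:                      # the analysis stage produces work
        q.put(fn)
    q.put(EMPTY)                              # signal that no more work is coming
    for t in workers:
        t.join()

compile_all([f"fn_{i}" for i in range(100)])
```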
Performance results
To measure the impact, the team compiled gimple-match.c, the biggest file in the GCC project. The file has more than 100,000 lines of code and around 1,700 functions, with almost no loops inside these functions. The computer used in this benchmark had an Intel(R) Core(TM) i5-8250U CPU with 8 GB of RAM, i.e. a CPU with 4 cores and Hyperthreading, resulting in 8 virtual cores.

[Figure: elapsed time before and after Intra Procedural GIMPLE parallelization. Source: gcc.gnu.org]

The figure shows the results before and after Intra Procedural GIMPLE parallelization. The time elapsed dropped from 7 seconds to around 4 seconds with 2 threads and around 3 seconds with 4 threads, resulting in speedups of 1.72x and 2.52x, respectively. Using Hyperthreading did not affect the result. This result was used to estimate the improvement from RTL parallelization.

[Figure: GIMPLE parallelization compared with total compilation time. Source: gcc.gnu.org]

When compared with the total compilation time, this amounts to a small improvement of about 10% when compiling the file.

[Figure: estimated overall speedup once RTL is also parallelized. Source: gcc.gnu.org]

Using the same approach as in the previous graph, together with the speedup measured for GIMPLE, one can estimate a speedup of 1.61x for GCC overall once it is parallelized further.

The team has suggested a number of to-dos for anyone wanting to take Parallel GCC further:
- Find and fix all race conditions in GIMPLE; there are still random crashes when code is compiled using the parallel option.
- Make this GCC compile itself.
- Make this GCC pass all tests in the testsuite.
- Add multithreaded-environment support to the garbage collector.
- Parallelize the RTL part, which will improve the current results, as indicated in the Results chapter.
- Parallelize the IPA part, which can also improve the time of LTO compilations.
- Refactor the remaining thread-specific variables by allocating them as soon as threads are started, or at pass execution.

GCC project members say that this project is under development and still has several bugs. A user on Hacker News writes, "I look forward to this. One that will be important for reproducible builds is having tests for non-determinism. Having nondeterministic code gen in a compiler is a source of frustration and despair and sucks to debug."

To learn about Parallel GCC in detail, read the official document.

Other interesting news in programming
- Introducing 'ixy', a simple user-space network driver written in high-level languages like Rust, Go, and C#, among others
- GNOME 3.34 releases with tab pinning, improved background panel, custom folders and more!
- The Eclipse Foundation releases Jakarta EE 8, the first truly open-source, vendor-neutral Java EE

Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs

Bhagyashree R
29 Aug 2019
3 min read
Yesterday, Microsoft announced that it supports the addition of its Extended File Allocation Table (exFAT) file system to the Linux kernel and publicly released its technical specification.

https://twitter.com/OpenAtMicrosoft/status/1166742237629308928

Launched in 2006, exFAT is the successor to Microsoft's FAT and FAT32 file systems, which are widely used in the majority of flash memory storage devices such as USB drives and SD cards. It uses 64 bits to describe file size and allows for clusters as large as 32 MB. As per the specification, it was designed with simplicity and extensibility in mind.

John Gossman, Microsoft Distinguished Engineer and Linux Foundation Board Member, wrote in the announcement, "exFAT is the Microsoft-developed file system that's used in Windows and in many types of storage devices like SD cards and USB flash drives. It's why hundreds of millions of storage devices that are formatted using exFAT "just work" when you plug them into your laptop, camera, and car."

Because exFAT was previously proprietary, mounting these flash drives and cards on Linux machines required installing additional software, such as a FUSE-based exFAT implementation. Supporting exFAT in the Linux kernel will give users a full-featured implementation that can also be more performant than the FUSE implementation. In addition, its inclusion in OIN's Linux System Definition will allow it to be cross-licensed in a royalty-free manner. Microsoft shared that the exFAT code incorporated into the Linux kernel will be licensed under GPLv2.

https://twitter.com/OpenAtMicrosoft/status/1166773276166828033

Beyond supporting exFAT in the Linux kernel, Microsoft also hopes that its specification becomes part of the Open Invention Network's (OIN) Linux definition. Keith Bergelt, OIN's CEO, told ZDNet, "We're happy and heartened to see that Microsoft is continuing to support software freedom. They are giving up the patent levers to create revenue at the expense of the community. This is another step of Microsoft's transformation in showing it's truly committed to Linux and open source." The next edition of the Linux System Definition is expected to be published in the first quarter of 2020, after which any member of the OIN will be able to use exFAT without paying a patent royalty.

The Linux Foundation also appreciated Microsoft's move to bring exFAT to the Linux kernel:

https://twitter.com/linuxfoundation/status/1166744195199115264

Other developers shared their excitement as well. A Hacker News user commented, "OMG, I can't believe we finally have a cross-platform read/write disk format. At last. No more Fuse. I just need to know when it will be available for my Raspberry Pi."

Read the official announcement by Microsoft to know more in detail.

- Microsoft Edge Beta is now ready for you to try
- Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms
- CERN plans to replace Microsoft-based programs with an affordable open-source software

Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!

Amrata Joshi
04 Jun 2019
6 min read
Yesterday, Apple previewed the next version of macOS, called Catalina, at its ongoing Worldwide Developers Conference (WWDC) 2019. macOS 10.15, or Catalina, comes with new features, apps, and technology for developers. With Catalina, Apple is replacing iTunes with dedicated entertainment apps: Apple Podcasts, Apple Music, and the Apple TV app. macOS Catalina is expected to be released this fall.

Craig Federighi, Apple's senior vice president of Software Engineering, said, "With macOS Catalina, we're bringing fresh new apps to the Mac, starting with new standalone versions of Apple Music, Apple Podcasts and the Apple TV app." He further added, "Users will appreciate how they can expand their workspace with Sidecar, enabling new ways of interacting with Mac apps using iPad and Apple Pencil. And with new developer technologies, users will see more great third-party apps arrive on the Mac this fall."

What's new in macOS Catalina

Sidecar
Sidecar is a new feature in macOS 10.15 that lets users extend their Mac desktop by using an iPad as a second display or as a high-precision input device across creative Mac apps. Paired with an Apple Pencil, the iPad can be used for drawing, sketching, or writing in any Mac app that supports stylus input. Sidecar can be used for editing video with Final Cut Pro X, marking up iWork documents, or drawing with the Adobe Illustrator iPad app.

iPad app support
Catalina adds iPad app support, a new way for developers to port their iPad apps to the Mac. Previously codenamed "Marzipan," the project is now called Catalyst. Developers will be able to use Xcode to target their iPad apps at macOS Catalina. Twitter is planning to port its iOS Twitter app to the Mac, and Atlassian is planning to bring its Jira iPad app to macOS Catalina. Though it is still not clear how many developers will support this porting, Apple is encouraging developers to bring their iPad apps to the Mac.

https://twitter.com/Atlassian/status/1135631657204166662
https://twitter.com/TwitterSupport/status/1135642794473558017

Apple Music
Apple Music is a new music app that helps users discover new music, with over 50 million songs, playlists, and music videos. Users will have access to their entire music library, including songs they have downloaded, purchased, or ripped from a CD.

Apple TV app
The Apple TV app features Apple TV channels, personalized recommendations, and more than 100,000 iTunes movies and TV shows. Users can browse, buy, or rent, and also enjoy 4K HDR and Dolby Atmos-supported movies. It comes with a Watch Now section including Up Next, where users can keep track of what they are currently watching and resume on any screen. Apple TV+, Apple's original video subscription service, will be available in the Apple TV app this fall.

Apple Podcasts
The Apple Podcasts app features over 700,000 shows in its catalog and includes an option to be automatically notified of new episodes as soon as they become available. The app comes with new categories, collections curated by editors around the world, and advanced search tools that help find episodes by host, guest, or even discussion topic. Users can also sync their media to their devices using a cable in the new entertainment apps.

Security
In macOS Catalina, Gatekeeper checks all apps for known security issues, and the new data protections require all apps to get permission before accessing user documents.
Approve with Apple Watch lets users approve security prompts by tapping the side button on their Apple Watch. With the new Find My app, the location of a lost or stolen Mac can be anonymously relayed back to its owner by other Apple devices, even when it is offline. Macs will occasionally send out a secure Bluetooth signal, which is used to create a mesh network with other Apple devices and populate a map of where the device is located, helping users track down their hardware. In addition, Macs with the T2 Security Chip now support Activation Lock, which makes them less attractive to thieves.

DriverKit
The macOS Catalina 10.15 beta SDK includes the DriverKit framework, which can be used to create device drivers that the user installs on their Mac. Drivers built with DriverKit run in user space for improved system security and stability. The framework provides C++ classes for I/O services, memory descriptors, device matching, and dispatch queues, and defines I/O-appropriate types for numbers, strings, collections, and other common types. These are used with family-specific driver frameworks like USBDriverKit and HIDDriverKit.

zsh shell on Mac
In the macOS Catalina beta, currently available only to members of the Apple Developer Program, the Mac uses zsh as the interactive shell and the default login shell. Users can also make zsh the default in earlier versions of macOS; currently, bash is the default shell in macOS Mojave and earlier. zsh is compatible with the Bourne shell (sh) and bash. Apple is also signalling that developers should start moving to zsh on macOS Mojave or earlier; since the bundled bash is an aging shell, the company appears to think that switching to something more modern makes sense.

https://twitter.com/film_girl/status/1135738853724000256
https://twitter.com/_sjs/status/1135715757218705409
https://twitter.com/wongmjane/status/1135701324589256704

Additional features
Safari has an updated start page that uses Siri Suggestions to elevate frequently visited sites, bookmarks, iCloud tabs, reading list selections, and links sent in Messages. macOS Catalina adds options to block email from a specified sender, mute an overly active thread, and unsubscribe from commercial mailing lists. Reminders has been redesigned with a new user interface that makes it easier to create, organize, and track reminders.

It seems users are excited about the announcements and are looking forward to exploring the possibilities of the new features.

https://twitter.com/austinnotduncan/status/1135619593165189122
https://twitter.com/Alessio____20/status/1135825600671883265
https://twitter.com/MasoudFirouzi/status/1135699794360438784
https://twitter.com/Allinon85722248/status/1135805025928851457

To know more about this news, check out Apple's post.

- Apple proposes a "privacy-focused" ad click attribution model for counting conversions without tracking users
- Apple Pay will soon support NFC tags to trigger payments
- U.S. Supreme Court ruled 5-4 against Apple on its App Store monopoly case

To create effective API documentation, know how developers use it, says ACM

Bhagyashree R
19 Jul 2019
5 min read
Earlier this year, the Association for Computing Machinery (ACM), in the January 2019 issue of Communication Design Quarterly (CDQ), discussed how developers use API documentation when getting into a new API and suggested a few guidelines for writing effective API documentation.

Application Programming Interfaces (APIs) are standardized and documented interfaces that allow applications to communicate with each other without having to know how they are implemented. Developers often turn to API references, tutorials, example projects, and other resources to understand how to use them in their projects. To support the learning process effectively and optimize API documentation, the study tried to answer the following questions:
- Which information resources offered by the API documentation do developers use, and to what extent?
- What approaches do developers take when they start working with a new API?
- What aspects of the content hinder efficient task completion?

API documentation and content categories used in the study
The study observed 12 developers (11 male and 1 female), who were asked to solve a set of pre-defined tasks using an unfamiliar public API. To solve these tasks, they were allowed to refer only to the documentation published by the API provider. The participants used the API documentation about 49% of the time while solving the tasks. On an individual level, there was not much variation, with the means for all but two participants ranging between 41% and 56%. The most used content category was the API reference, followed by the Recipes page. The aggregate time spent on the Recipes and Samples categories was almost equal to the time spent on the API reference. The Concepts page, however, was used less often than the API reference.

[Figure. Source: ACM]

"These findings show that the API reference is an important source of information, not only to solve specific programming issues when working with an API developers already have some experience with, but even in the initial stages of getting into a new API, in line with Meng et al. (2018)," the study concludes.

How do developers learn a new API?
The researchers observed two different problem-solving behaviors, closely matching the opportunistic and systematic developer personas discussed by Clarke (2007).

Developers with the opportunistic approach tried to solve the problem in an "exploratory fashion". They were more intuitive, open to making errors, and often tried solutions without double-checking the documentation. This group did not invest much time in getting a general overview of the API before starting the first task, and preferred fast, direct access to information over reading large sections of the documentation.

In contrast, developers with the systematic approach tried to get a deeper understanding of the API before using it. They took some time to explore the API and prepare the development environment before starting the first task. This group attempted to follow the proposed processes and suggestions closely, and was also able to notice parts of the documentation that were not directly relevant to the given task.

What aspects of API documentation make it hard for developers to complete tasks efficiently?

Lack of transparent navigation and a search function
Some participants felt that the API documentation lacked a consistent system of navigation aids and did not offer side navigation, including within-page links.
Developers often wanted a search function when they were missing a particular piece of information, such as a term they did not know. As the documentation used in the test did not offer a search field, developers had to fall back on a simple page search, which was often unsuccessful.

Issues with the high-level structuring of API documentation
The participants observed several problems with the high-level structuring of the API documentation, that is, the split of information into Concepts, Samples, API reference, and so on. For instance, when searching for a particular piece of information, participants sometimes found it difficult to decide which content category to select. It was particularly unclear how the content provided in Samples differed from that in Recipes.

Unable to reuse code examples
Most of the time, participants developed their solution using the sample code provided in the documentation. However, efficient use of sample code was hindered by placeholders in the code referencing other code examples.

A few guidelines for writing effective API documentation
- Organize the content according to API functionality: The API documentation should be divided into categories that reflect the functionality or content domain of the API. Participants would have found it more convenient if, instead of dividing the documentation into "Samples," "Concepts," "API reference" and "Recipes," the API used categories such as "Shipment Handling," "Address Handling" and so on.
- Enable efficient access to relevant content: When designing API documentation, it is important to take specific measures to improve access to content that is relevant to the task at hand. This can be done by organizing the content according to API functionality, presenting conceptual information integrated with related tasks, and providing transparent navigation and a powerful search function.
- Facilitate initial entry into the API: Identify appropriate entry points into the API and relate particular tasks to specific API elements. Provide clean and working code examples, provide relevant background knowledge, and connect concepts to code.
- Support different development strategies: Keep in mind the different strategies developers adopt when approaching a new API. Both the content and the way it is presented should serve the needs of both opportunistic and systematic developers.

These were some of the observations and implications from the study. To know more, read the paper: How Developers Use API Documentation: An Observation Study.

- GraphQL API is now generally available
- Best practices for RESTful web services: Naming conventions and API Versioning [Tutorial]
- Stripe's API suffered two consecutive outages yesterday causing elevated error rates and response times

OpenSSH 7.8 released!

Melisha Dsouza
27 Aug 2018
4 min read
OpenSSH 7.8 base source code was released on August 24, 2018. It includes many new features, such as a fix for the username enumeration vulnerability, changes to the default format of the private key file, and more. Additionally, support for running ssh setuid root has been removed, and a couple of new signature algorithms have been added. The base source code is designed specifically for OpenBSD, with the aim of keeping the code simple, clean, minimal, and auditable. This release will shortly be available from the mirrors listed at http://www.openssh.com/. Let's take a look at the features developers can expect in this new version of OpenSSH.

Changes that may affect existing configurations
- ssh-keygen(1): Writes OpenSSH-format private keys by default instead of using OpenSSL's PEM format. This offers better protection against offline password guessing and supports key comments in private keys.
- sshd(8): Internal support for S/Key multiple-factor authentication has been removed. S/Key may still be used via PAM or BSD auth.
- ssh(1): Vestigial support for running ssh(1) as setuid has been removed.
- sshd(8): The semantics of PubkeyAcceptedKeyTypes and HostbasedAcceptedKeyTypes now specify signature algorithms that are accepted for their respective authentication mechanisms. This matters when using the RSA/SHA2 signature algorithms "rsa-sha2-256", "rsa-sha2-512" and their certificate counterparts. Configurations that override these options but do not use these algorithm names may cause unexpected authentication failures.
- sshd(8): The precedence of session environment variables has changed. ~/.ssh/environment and environment="..." options in authorized_keys files can no longer override SSH_* variables set implicitly by sshd.
- ssh(1)/sshd(8): The default IPQoS used by ssh/sshd has changed. Interactive traffic will use DSCP AF21, and CS1 will be used for bulk traffic. For a detailed understanding, head over to the commit message: https://cvsweb.openbsd.org/src/usr.bin/ssh/readconf.c#rev1.28

What's new in OpenSSH 7.8
This bugfix release has a couple of new features in store for developers. Some of the important ones are:
- New signature algorithms "rsa-sha2-256-cert-v01@openssh.com" and "rsa-sha2-512-cert-v01@openssh.com" to explicitly force the use of RSA/SHA2 signatures in authentication. Read more at ssh(1)/sshd(8).
- Countermeasures have been added against timing attacks used for account validation/enumeration. sshd will impose a minimum time for each failed authentication attempt, consisting of a global 5ms minimum plus an additional per-user 0-4ms delay derived from a host secret. Find more information at sshd(8).
- In sshd(8), a SetEnv directive lets an administrator explicitly specify environment variables in sshd_config. Variables set by SetEnv override the default and client-specified environment (a brief configuration sketch follows below).
- In ssh(1), a SetEnv directive lets the client request that the server set an environment variable in the session. Similar to the existing SendEnv option, these variables are set subject to server configuration.
- "SendEnv -PATTERN" clears environment variables previously marked for sending to the server.
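To make the new environment-variable directives more concrete, here is a rough sketch of how they might appear in configuration files. Treat it as an illustration based on the release notes rather than a canonical example: the variable names and host alias are placeholders, and the authoritative syntax is documented in the sshd_config(5) and ssh_config(5) manual pages.

```
# Server side (sshd_config): explicitly set environment variables in sessions.
# These override defaults and any client-supplied values.
SetEnv LANG=en_US.UTF-8 DEPLOY_ENV=staging

# Client side (~/.ssh/config): ask the server to set a variable for this host,
# and clear a previously requested SendEnv pattern with the new "-PATTERN" form.
Host build-server
    SetEnv BUILD_PROFILE=release
    SendEnv -LC_*
```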
Bug Fixes introduced in this new version
- In sshd(8), observable differences in request parsing that could be used to determine whether a target user is valid have been removed, and a failure to read authorized_keys caused by faulty supplemental group caching has been fixed.
- Checking of authorized_keys environment="..." options has been relaxed to allow underscores in variable names (a regression introduced in 7.7).
- Some memory leaks in ssh(1)/sshd(8) have been fixed.
- SSH2_MSG_DEBUG messages sent by ssh(1)/sshd(8) to Twisted Conch clients have been disabled.
- Tunnel forwarding has also been fixed.
- In ssh(1), a pwent clobber (introduced in openssh-7.7) that could occur during key loading, manifesting as a crash on some platforms, has been fixed.

To get a detailed overview of the features and changes introduced in portability and checksums in this new release, head over to the official release notes.

- JavaFX 11 to release soon, announces the Gluon team
- Gitlab 11.2 releases with preview changes in Web IDE, Android Project Import and more
- Bodhi Linux 5.0.0 released with updated Ubuntu core 18.04 and a modern look

Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards

Vincy Davis
19 Jun 2019
4 min read
Update: Five days after announcing the decision to drop the i386 architecture, Steve Langasek has changed his stance. On 23rd June, Langasek apologised to users and said that this is not quite the case: Ubuntu is only dropping updates to the i386 libraries, which will be frozen at the 18.04 LTS versions. He also mentioned that the team plans to support i386 applications, including games, on versions of Ubuntu later than 19.10.

This update came after Valve Linux developer Pierre-Loup Griffais tweeted on 21st June that Steam will not support Ubuntu 19.10 and its future releases, and recommended the same to its users. Griffais stated that Valve is planning to switch to a different distribution and is evaluating ways to minimize breakage for its users.

https://twitter.com/Plagman2/status/1142262103106973698

Amid all the uncertainty around i386, Wine developers have also raised concerns, because many 64-bit Windows applications still use a 32-bit installer or some 32-bit components. Rosanne DiMesio, one of the admins of Wine's Applications Database (AppDB) and Wiki, said in a mailing list post that there are many possibilities, such as building pure 64-bit Wine packages for Ubuntu.

Yesterday, the Ubuntu engineering team announced its decision to discontinue i386 (32-bit) as an architecture from Ubuntu 19.10 onwards. In a post to the Ubuntu Developer mailing list, Canonical's Steve Langasek explains that "i386 will not be included as an architecture for the 19.10 release, and we will shortly begin the process of disabling it for the eoan series across Ubuntu infrastructure."

Langasek also mentions that existing builds and distributions of 32-bit software, libraries, and tools will no longer work on the newer versions of Ubuntu, and that the Ubuntu team will be working on 32-bit support over the course of the 19.10 development cycle.

The topic of dropping i386 systems has been under discussion in the Ubuntu developer community since last year. One of the mailing list messages notes that "Less and less non-amd64-compatible i386 hardware is available for consumers to buy today from anything but computer parts recycling centers. The last of these machines were manufactured over a decade ago, and support from an increasing number of upstream projects has ended."

Earlier this year, Langasek stated in one of his mailing list posts that running a 32-bit i386 kernel on recent 64-bit Intel chips carries a risk of weaker security than using a 64-bit kernel. Usage of i386 has also declined broadly across the ecosystem, and hence it is "increasingly going to be a challenge to keep software in the Ubuntu archive buildable for this target", he adds.

Langasek also informed users that automated upgrades to 18.10 are disabled on i386. This has been done so that i386 users stay on the LTS, which will be supported until 2023, rather than being stranded on a non-LTS release that will only be supported until early 2021.

The general reaction to this news has been negative, with users expressing outrage at the discontinuation of the i386 architecture. A user on Reddit says, "Dropping support for 32-bit hosts is understandable. Dropping support for 32 bit packages is not. Why go out of your way to screw over your users?" Another user comments, "I really truly don't get it.
I've been using ubuntu at least since 5.04 and I'm flabbergasted how dumb and out of sense of reality they have acted since the beginning, considering how big of a headstart they had compared to everyone else. Whether it's mir, gnome3, unity, wayland and whatever else that escapes my memory this given moment, they've shot themselves in the foot over and over again."

On Hacker News, a user commented, "I have a 64-bit machine but I'm running 32-bit Debian because there's no good upgrade path, and I really don't want to reinstall because that would take months to get everything set up again. I'm running Debian not Ubuntu, but the absolute minimum they owe their users is an automatic upgrade path."

A few think this step was needed, for the sake of good riddance. Another Redditor adds, "From a developer point of view, I say good riddance. I understand there is plenty of popular 32-bit software still being used in the wild, but each step closer to obsoleting 32-bit is one step in the right direction in my honest opinion."

- Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
- Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
- Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users

Amazon Managed Streaming for Apache Kafka (Amazon MSK) is now generally available

Amrata Joshi
03 Jun 2019
3 min read
Last week, Amazon Web Services announced the general availability of Amazon Managed Streaming for Apache Kafka (Amazon MSK). Amazon MSK makes it easy for developers to build and run applications based on Apache Kafka without having to manage the underlying infrastructure. It is fully compatible with Apache Kafka, which enables customers to easily migrate their on-premises or Amazon Elastic Compute Cloud (Amazon EC2) clusters to Amazon MSK without code changes.

Customers use Apache Kafka to capture and analyze real-time data streams from a range of sources, including database logs, IoT devices, financial systems, and website clickstreams. Many customers choose to self-manage their Apache Kafka clusters and end up spending time and money on securing, scaling, patching, and ensuring high availability for Apache Kafka and Apache ZooKeeper clusters. Amazon MSK combines these attributes of Apache Kafka with the availability, security, and scalability of AWS.

Customers can now create Apache Kafka clusters designed for high availability, spanning multiple Availability Zones (AZs), with a few clicks (a minimal programmatic sketch follows at the end of this piece). Amazon MSK also monitors server health and automatically replaces servers when they fail, and customers can easily scale out cluster storage in the AWS Management Console to meet changes in demand. Amazon MSK runs the Apache ZooKeeper nodes at no additional cost and provides multiple levels of security for Apache Kafka clusters, including VPC network isolation and AWS Identity and Access Management (IAM). It allows customers to continue running applications built on Apache Kafka and to use Apache Kafka-compatible tools and frameworks.

Rajesh Sheth, General Manager of Amazon MSK at AWS, wrote to us in an email, "Customers who are running Apache Kafka have told us they want to spend less time managing infrastructure and more time building applications based on real-time streaming data." He further added, "Amazon MSK gives these customers the ability to run Apache Kafka without having to worry about managing the underlying hardware, and it gives them an easy way to integrate their Apache Kafka applications with other AWS services. With Amazon MSK, customers can stand up Apache Kafka clusters in minutes instead of weeks, so they can spend more time focusing on the applications that impact their businesses."

Amazon MSK is currently available in the US East (Ohio), US East (N. Virginia), US West (Oregon), EU (Ireland), EU (Frankfurt), EU (Paris), EU (London), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Asia Pacific (Sydney) Regions, and will expand to additional AWS Regions in the next year.

- Amazon rejects all 11 shareholder proposals including the employee-led climate resolution at Annual shareholder meeting
- Amazon to roll out automated machines for boxing up orders: Thousands of workers' job at stake
- Amazon resists public pressure to re-assess its facial recognition business; "failed to act responsibly", says ACLU
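As referenced above, here is a minimal sketch of creating an MSK cluster programmatically with boto3. Treat it as an assumption-laden illustration rather than an official example: the Region, Kafka version, instance type, and the subnet and security group IDs are placeholders to replace with values valid in your own account.

```python
# Hedged sketch: create an Amazon MSK cluster with boto3. All identifiers
# below (subnets, security group, cluster name, versions) are placeholders.
import boto3

kafka = boto3.client("kafka", region_name="us-east-1")

response = kafka.create_cluster(
    ClusterName="demo-msk-cluster",
    KafkaVersion="2.1.0",                      # placeholder; pick a supported version
    NumberOfBrokerNodes=3,                     # one broker per Availability Zone here
    BrokerNodeGroupInfo={
        "InstanceType": "kafka.m5.large",
        "ClientSubnets": [                     # placeholder subnet IDs across AZs
            "subnet-0aaa0000000000001",
            "subnet-0bbb0000000000002",
            "subnet-0ccc0000000000003",
        ],
        "SecurityGroups": ["sg-0123456789abcdef0"],
    },
)
print(response["ClusterArn"])                  # ARN of the newly created cluster
```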

OpenJDK Project Valhalla’s LW2 early access builds are now available for you to test

Bhagyashree R
09 Jul 2019
3 min read
Last week, the early access builds for OpenJDK Project Valhalla's LW2 phase were released; the phase was first proposed in October last year. LW2 is the next iteration of the L-World series, bringing further language and JDK API support for inline types.

https://twitter.com/SimmsUpNorth/status/1147087960212422658

Proposed in 2014, Project Valhalla is an experimental OpenJDK project under which the team is working on major new language features and enhancements for Java 10 and beyond. The work focuses on the following areas:
- Value Types
- Generic Specialization
- Reified Generics
- Improved 'volatile' support

The LW2 specifications

Javac source support
- Starting from LW2, the prototype is based on the mainline JDK (currently version 14), which is why it requires a source level >= JDK 14.
- A class is declared as an inline type using the 'inline class' modifier or the '@__inline__' annotation.
- Interfaces, annotation types, and enums cannot be declared as inline types; top-level, inner, and local classes may be.
- As inline types are implicitly final, they cannot be abstract, and all instance fields of an inline class are implicitly final.
- Inline types implicitly extend 'java.lang.Object', similar to enums, annotation types, and interfaces.
- "Indirect" projections of inline types are supported via the "?" operator.
- javac now allows using the '==' and '!=' operators to compare inline types.

Java APIs
- New or modified APIs include 'isInlineClass()', 'asPrimaryType()', 'asIndirectType()', 'isIndirectType()', 'asNullableType()', and 'isNullableType()'.
- The 'getName()' method now reflects the Q or L type signatures for arrays of inline types.
- Calling 'newInstance()' on an inline type throws 'NoSuchMethodException', and 'setAccessible()' throws 'InaccessibleObjectException'.
- With LW2, initial core Reflection and VarHandles support is in place.

Runtime
- Attempting to synchronize on, or call wait(*) or notify*() on, an inline type throws 'IllegalMonitorException'.
- 'ClassCircularityError' is thrown when loading an instance field of an inline type that declares its own type, either directly or indirectly.
- 'NotSerializableException' is thrown when attempting to serialize an inline type.
- Casting from an indirect type to an inline type may result in a 'NullPointerException'.

Download the early access binaries to test this prototype. These were some of the specifications of the LW2 iteration; check out the full list on OpenJDK's official website, and stay tuned to the current happenings in Project Valhalla.

- Getting started with Z Garbage Collector (ZGC) in Java 11 [Tutorial]
- Storm 2.0.0 releases with Java enabled architecture, new core and streams API, and more
- Firefox 67 will come with faster and reliable JavaScript debugging tools

Introducing TensorFlow Graphics packed with TensorBoard 3D, object transformations, and much more

Amrata Joshi
10 May 2019
3 min read
Yesterday, the team at TensorFlow introduced TensorFlow Graphics. A computer graphics pipeline requires 3D objects and their positioning in the scene, a description of the materials they are made of, lights, and a camera. This scene description is then interpreted by a renderer to generate a synthetic image. In contrast, a computer vision system starts from an image and tries to infer the parameters of the scene, allowing it to predict which objects are in the scene, what materials they are made of, and their three-dimensional position and orientation.

Training machine learning systems capable of solving these complex 3D vision tasks usually requires large quantities of data. Since labelling data is an expensive and complex process, it helps to have mechanisms for designing machine learning models that can comprehend the three-dimensional world while being trained without much supervision. Combining computer vision and computer graphics techniques makes it possible to leverage the vast amounts of unlabelled data available. For instance, this can be achieved through analysis by synthesis, where the vision system extracts the scene parameters and the graphics system renders an image back from them. If the rendering matches the original image, the vision system has accurately extracted the scene parameters. In this setup, computer vision and computer graphics go hand in hand, forming a single machine learning system, similar to an autoencoder, that can be trained in a self-supervised manner.

[Image source: TensorFlow]

Below are some of the functionalities of TensorFlow Graphics.

Object transformations
Object transformations control the position of objects in space. In the library's example, the axis-angle formalism is used to rotate a cube: with the rotation axis pointing up, a positive angle makes the cube rotate counterclockwise (a small numerical sketch of the axis-angle transform follows at the end of this piece). This task is at the core of many applications, including robots that interact with their environment.

Modelling cameras
Camera models play a crucial role in computer vision, as they influence the appearance of three-dimensional objects projected onto the image plane. For more details about camera models and a concrete example of how to use them in TensorFlow, check out the Colab example.

Material models
Material models define how light interacts with objects to give them their unique appearance. Some materials, like plaster, reflect light uniformly in all directions, while others, like mirrors, are purely specular. Users can play with the parameters of the material and the light to develop a good sense of how they interact.

TensorBoard 3D
TensorFlow Graphics features a TensorBoard plugin to interactively visualize 3D meshes and point clouds. This enables visual debugging, which helps assess whether an experiment is going in the right direction.

To know more about this news, check out the post on Medium.

- TensorFlow 1.13.0-rc2 releases!
- TensorFlow 1.13.0-rc0 releases!
- TensorFlow.js: Architecture and applications
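As referenced under "Object transformations", here is a small numerical sketch of an axis-angle rotation implemented with Rodrigues' formula using plain TensorFlow ops. It only illustrates the underlying math; TensorFlow Graphics ships its own differentiable transformation ops for this, and the point, axis, and angle values below are arbitrary.

```python
# Axis-angle rotation via Rodrigues' formula, written with plain TensorFlow ops
# purely to illustrate the transform discussed in the article. The axis points
# up (z) and a positive angle rotates counterclockwise.
import numpy as np
import tensorflow as tf

def rotate_axis_angle(point, axis, angle):
    """Rotate a 3-vector `point` around unit-vector `axis` by `angle` radians."""
    point = tf.convert_to_tensor(point, dtype=tf.float32)
    axis = tf.convert_to_tensor(axis, dtype=tf.float32)
    angle = tf.convert_to_tensor(angle, dtype=tf.float32)
    cos_a = tf.cos(angle)
    sin_a = tf.sin(angle)
    # v*cos(a) + (k x v)*sin(a) + k*(k . v)*(1 - cos(a))
    return (point * cos_a
            + tf.linalg.cross(axis, point) * sin_a
            + axis * tf.reduce_sum(axis * point) * (1.0 - cos_a))

point = [1.0, 0.0, 0.0]           # a point on the x-axis
axis = [0.0, 0.0, 1.0]            # rotation axis pointing up
rotated = rotate_axis_angle(point, axis, angle=np.pi / 2.0)
print(rotated.numpy())            # ~[0, 1, 0]: a counterclockwise quarter turn
```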

What the IEEE 2018 programming languages survey reveals to us

Savia Lobo
19 Aug 2018
7 min read
Programming languages are the foundation of all the technology that surrounds us. Developers, tech enthusiasts, and others keep up with the latest programming languages to stay abreast of the advancements in each of them. Popular ranking sites such as TIOBE, RedMonk, Stack Overflow, and IEEE Spectrum help people see which programming languages are trending and where their favorite language stands. Of these, IEEE Spectrum and Stack Overflow publish their rankings annually, whereas TIOBE does so every month and RedMonk semi-annually.

Of the two annual surveys, Stack Overflow collects responses from 56,033 coders in 173 countries, whereas IEEE Spectrum synthesizes rankings from 10 sources:
- Google search of "X programming"
- Google Trends
- Twitter
- GitHub
- Stack Overflow
- Reddit
- Hacker News
- CareerBuilder
- Dice
- IEEE Xplore Digital Library

IEEE Spectrum aggregates different kinds of statistical data with a view to generating the most reliable ranking, and it also offers the most personalized one: the interactive interface allows readers to filter by search trends, job trends, or open-source community trends, and even to modify the weighting of each dimension.

Across the popular language ranking surveys and our own Packt Skill Up survey 2018, the top 10 programming languages for this year are as follows.

Top 10 languages across popular surveys

Rank | Stack Overflow | Redmonk | TIOBE | IEEE Spectrum | Packt Skill Up Survey
1 | JavaScript | JavaScript | Java | Python | Java
2 | HTML | Java | C | C++ | JavaScript
3 | CSS | Python | C++ | C | Python
4 | SQL | PHP | Python | Java | C#
5 | Java | C# | Visual Basic | C# | SQL
6 | Bash/Shell | C++ | C# | PHP | C++
7 | Python | CSS | PHP | R | C
8 | C# | Ruby | JavaScript | JavaScript | PHP
9 | PHP | C | SQL | Go | Swift
10 | C++ | Swift | Assembly | Assembly | Go

Our takeaways from the IEEE survey

What was obvious
Python in the top 3: Python has held the top position in the IEEE Spectrum ranking for two years in a row now. It is one of the easiest programming languages of all, with an approachable syntax. However, IEEE notes that part of the reason Python is at the zenith is that it is now also listed as an embedded language.

Go in the top 10: Google's Go has risen from 7th position last year to 5th this year. Its speed, simplicity, reliability, cross-platform ability, native concurrency, and easy deployment make it the go-to cloud-native language for developers, and one of the fastest-growing programming languages.

Java, C++, C, C# in the top 5: These legendary languages are still in the top 5 due to their large-scale, industry-wide adoption and established communities. Many professional developers have been working in these languages for years and find it difficult to migrate to a new programming language, which keeps them at the top.

R drops down a notch: R, the language for statistics and big data, has stepped down from 6th position to 7th. R's decline could be due to the popularity of Python, whose high-quality libraries for both statistics and machine learning make it a more flexible choice than the more specialized R.

What was surprising?
Kotlin not included in the list: The recently popular programming language for Android development is missing from IEEE's survey list, even though many developers use Kotlin instead of Python and Java for internal app development (console apps, OpenGL apps, threaded socket servers, and so on). Kotlin also eases porting of code from Python to Kotlin.

Many promising languages missing from the IEEE list: Languages such as TypeScript and Dart are missing. TypeScript is a superset of JavaScript, which itself lacks a type system; TypeScript adds optional static typing to JavaScript. Similarly, Dart is also a useful language for programming front-end applications, and is easy to use with almost no learning curve.

Matlab and Assembly maintain their positions: Matlab is used for scientific computing and mathematical processing. First released in 1984, it is one of the oldest languages after Assembly, still holding the 11th position in this list; it is widely used in academia and research and hence never goes out of date. Similarly, Assembly, the oldest form of programming, at the 10th position is still relevant to many developers, because it produces fast code without a compiler and is the best bet for machine-level programming.

JavaScript not in the top 5: Despite being one of the dominant languages for front-end web development, JavaScript sits at the 8th position in IEEE's list. This may be because other technologies, such as TypeScript and WebAssembly, are now providing an easy route to the web for C/C++ and other developers.

What we are skeptical about / don't agree with
PHP might not stay in the top 10: PHP is one of the most popular languages for server-side programming, but languages and frameworks such as Python and Ruby on Rails are competing with PHP by providing simpler, more powerful coding syntax and tools in the same domain.

Ruby might drop down a few more notches: Although Ruby was the first full-stack language to be used on both front- and back-end development, it is difficult to learn. Integrating third-party libraries in Ruby is also difficult, which makes it inflexible. As there are several options in the market today, I am skeptical Ruby will maintain its current position.

Is Swift dropping from its position? Swift was built by Apple Inc. for iOS, macOS, watchOS, and tvOS. Because it is tied to an Apple-only development environment, developers are moving to multi-platform mobile frameworks such as Microsoft's Xamarin, Apache Cordova, and Ionic, which may affect Swift's user community.

Limitations of the IEEE survey
The IEEE Spectrum 2018 survey included 47 programming languages, ranging from the most widely adopted to the least. However, not all programming languages made it onto this list: currently popular languages such as Kotlin, Dart, TypeScript, WebAssembly, and some others were missing. As per some comments on the IEEE blog, IEEE uses the languages listed on GitHub, where Visual Basic is the common name used for both VB.NET and Visual Basic. Also, some languages present in the other surveys are not present in the IEEE survey; for instance, the TIOBE index has PL/SQL at the 20th position, while the IEEE survey does not mention it.

Another limitation was that the ranking tool showed completely different results in different browsers, about which Stephen Cass from IEEE Spectrum said, "I'd say it's due to variations in how JQuery/JavaScript is implemented in the different browsers: under the hood, the TPL uses a lot of floating point math, so what you are seeing could be due to differences in precision/rounding, et cetera. Ultimately, I suspect the solution will be to calculate the rankings completely server-side: the underlying code for the TPL is five years old, so we were thinking of overhauling it anyway, and this certainly puts some weight behind that." Stephen further added, "I should add that we built the TPL primarily using Chrome, so our canonical version of the rankings is the one you see in that browser."

Read more about the other programming languages ranked by IEEE Spectrum in the IEEE blog post.

- Rust 1.28 is here with global allocators, nonZero types and more
- Racket v7.0 is out with overhauled internals, updates to DrRacket, TypedRacket among others
- Grain: A new functional programming language that compiles to Webassembly

PayPal replaces Flow with TypeScript as their type checker for every new web app

Bhagyashree R
22 Jan 2019
2 min read
Yesterday, Kent C. Dodds, a JavaScript engineer at PayPal, shared in a post that every new app created at PayPal now uses TypeScript by default, replacing the previous type checker, Flow. He also shared why it took the team so long to migrate to TypeScript and which drawbacks of Flow TypeScript solves.

Dodds works on a toolkit called paypal-scripts, a package that bundles all the tools common to PayPal applications and published modules. It was created to replace the huge list of devDependencies in package.json, along with all the accompanying config files, with a single devDependencies entry. Keeping all the tools and configuration in a single package also makes updating much easier. This paypal-scripts module has now been merged into their base GitHub repo, named “sample-app”, to ensure that every new application gets its start with modern technology and tools. These applications will also be statically typed with TypeScript and tested with Jest.

It took Dodds so long to adopt TypeScript because he was hesitant to leave Babel and ESLint. He had been using these tools for several years and enjoyed building custom plugins for both. Earlier, TypeScript users faced challenges when using it with Babel and ESLint: a common theme was that Babel users found it difficult to set up TypeScript, and the linting experience also needed improvement, so the TypeScript team started working on better ESLint compatibility. For Dodds, this meant he did not have to give up these tools to adopt TypeScript, which is why he decided to replace Flow with it.

Dodds also mentions that the recurring unreliability of Flow pushed him toward this decision. Explaining the challenges, he wrote, “The editor plugins only sometimes worked (full disclosure, I never tried Nuclide and maybe my life would’ve been different if I had, but I tried Flow in Atom and VSCode) and I would get issues like the one all the time. It was incredibly frustrating because I could never trust my type checker. There were other issues as well.”

Read more in detail on Kent C. Dodds’ post: Why every new web app at PayPal starts with TypeScript.

Future of ESLint support in TypeScript
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0
Announcing ‘TypeScript Roadmap’ for January 2019- June 2019
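To make the static-typing point concrete, here is a minimal, hypothetical TypeScript sketch; it is not taken from PayPal’s codebase, and the Transaction interface and totalInCents function are invented purely for illustration. It shows the kind of mistake a compile-time type checker rejects that untyped JavaScript would only surface at runtime.

```typescript
// Hypothetical example (not PayPal code): a typed data shape and helper.
interface Transaction {
  id: string;
  amountCents: number;
  currency: "USD" | "EUR";
}

// Parameter and return types are verified at compile time.
function totalInCents(transactions: Transaction[]): number {
  return transactions.reduce((sum, tx) => sum + tx.amountCents, 0);
}

const txs: Transaction[] = [
  { id: "t1", amountCents: 1250, currency: "USD" },
  { id: "t2", amountCents: 300, currency: "EUR" },
];

console.log(totalInCents(txs)); // 1550

// The call below would be rejected by the compiler: "amount" is not a known
// property and "GBP" is not an allowed currency, so the bug never reaches runtime.
// totalInCents([{ id: "t3", amount: 100, currency: "GBP" }]);
```

Flow supports comparable annotations; Dodds’ complaint was about the reliability of Flow’s editor tooling rather than the type syntax itself, which is why PayPal switched type checkers rather than abandoning static typing.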

Facebook AI introduces Aroma, a new code recommendation tool for developers

Natasha Mathur
09 Apr 2019
3 min read
Facebook AI team announced a new tool, called Aroma, last week. Aroma is a code-to-code search and recommendation tool that uses machine learning (ML) to simplify the process of gaining insights from big codebases.

Aroma allows engineers to find common coding patterns easily by making a search query, without any need to manually browse through code snippets. This, in turn, saves time in their development workflow. So, if developers have written some code but want to see how others have implemented the same thing, they can run a search query to find similar code in related projects. After the search query is run, the results are returned as code ‘recommendations’. Each code recommendation is built from a cluster of similar code snippets found in the repository.

Aroma is more advanced than traditional code search tools. For instance, Aroma performs the search on syntax trees: instead of looking for string-level or token-level matches, it can find instances that are syntactically similar to the query code and then highlight the matching code by trimming away unrelated syntax structures. Aroma is fast, creating recommendations within seconds even for large codebases. Moreover, Aroma’s core algorithm is language-agnostic and can be deployed across codebases in Hack, JavaScript, Python, and Java.

How does Aroma work?

Aroma follows a three-step process to make code recommendations: feature-based search, re-ranking and clustering, and intersecting.

For feature-based search, Aroma indexes the code corpus as a sparse matrix. It parses each method in the corpus and creates its parse tree, then extracts a set of structural features from the parse tree of each method. These features capture information about variable usage, method calls, and control structures. Finally, a sparse vector is created for each method according to its features, and the top 1,000 method bodies whose dot products with the query vector are highest are retrieved as the candidate set for the recommendation.

In the re-ranking and clustering step, Aroma first re-ranks the candidate methods by their similarity to the query code snippet. Since the sparse vectors contain only abstract information about which features are present, the dot-product score is an underestimate of the actual similarity of a code snippet to the query. To address this, Aroma applies ‘pruning’ to the method syntax trees, which discards the irrelevant parts of a method body and retains the parts that best match the query snippet. This is how it re-ranks the candidate code snippets by their actual similarity to the query. Aroma then runs an iterative clustering algorithm to find clusters of code snippets that are similar to each other and contain extra statements useful for making code recommendations.

In the intersecting step, a code snippet is first taken as the “base” code, and ‘pruning’ is applied to it iteratively with respect to every other method in the cluster. The code remaining after the pruning process is the code common to all methods in the cluster, and this becomes a code recommendation.

“We believe that programming should become a semiautomated task in which humans express higher-level ideas and detailed implementation is done by the computers themselves”, states the Facebook AI team. For more information, check out the official Facebook AI blog.
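The retrieval stage described above boils down to scoring sparse feature vectors by dot product. Below is a toy TypeScript sketch of that idea; it is not Aroma’s implementation, and the token-count featurizer and function names are stand-ins for illustration only, since Aroma extracts its features from parse trees and follows retrieval with tree pruning, clustering, and intersection.

```typescript
// Toy sketch of feature-based candidate retrieval (illustrative, not Aroma's code).
type SparseVector = Map<string, number>;

// Stand-in featurizer: count identifier-like tokens. Aroma instead derives
// structural features (variable usage, method calls, control structures)
// from each method's parse tree.
function extractFeatures(source: string): SparseVector {
  const features: SparseVector = new Map();
  for (const token of source.match(/[A-Za-z_][A-Za-z0-9_]*/g) ?? []) {
    features.set(token, (features.get(token) ?? 0) + 1);
  }
  return features;
}

// Dot product of two sparse vectors: higher means more shared features.
function dotProduct(a: SparseVector, b: SparseVector): number {
  let score = 0;
  for (const [feature, count] of a) {
    score += count * (b.get(feature) ?? 0);
  }
  return score;
}

// Return the top-k corpus methods ranked by dot product with the query,
// mirroring how candidates are retrieved before re-ranking.
function retrieveCandidates(query: string, corpus: string[], k = 3): string[] {
  const queryVector = extractFeatures(query);
  return corpus
    .map(method => ({ method, score: dotProduct(queryVector, extractFeatures(method)) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(entry => entry.method);
}

// Example usage: find the corpus method most similar to a query snippet.
const corpus = [
  "function sum(xs) { return xs.reduce((a, b) => a + b, 0); }",
  "function readConfig(path) { return fs.readFileSync(path, 'utf8'); }",
];
console.log(retrieveCandidates("const total = xs.reduce((a, b) => a + b, 0);", corpus, 1));
```

In Aroma itself the candidate set retrieved this way is capped at the top 1,000 method bodies, and the later pruning, clustering, and intersection stages turn those candidates into concise recommendations.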
How to make machine learning based recommendations using Julia [Tutorial]
Facebook AI open-sources PyTorch-BigGraph for faster embeddings in large graphs
Facebook AI research and NYU school of medicine announces new open-source AI models and MRI dataset

LLVM will be relicensing under Apache 2.0 at the start of next year

Prasad Ramesh
18 Oct 2018
3 min read
After efforts that started last year, LLVM, the set of compiler-building tools, is moving closer to an Apache 2.0 license. Currently, the project has its own open source license created by the LLVM team. The move to Apache 2.0 is going forward based on the mailing list discussions.

Why the shift to Apache 2.0?

The current license is a bit vague, was not very welcoming to contributors, and had some patent issues. Hence, the team decided to shift to the industry-standard Apache 2.0. The new license was drafted by Heather Meeker, the same lawyer who worked on the Commons Clause. The goals of the relicensing, as listed on the LLVM website, are:

Encourage ongoing contributions to LLVM by preserving a low barrier to entry for contributors.
Protect users of LLVM code by providing explicit patent protection in the license.
Protect contributors to the LLVM project by explicitly scoping their patent contributions with this license.
Eliminate the schism between runtime libraries and the rest of the compiler that makes it difficult to move code between them.
Ensure that LLVM runtime libraries may be used by other open source and proprietary compilers.

The plan to shift LLVM to Apache 2.0

The new license is not plain Apache 2.0; the license header reads “Apache License v2.0 with LLVM Exceptions”. The exceptions relate to compiling source code; to learn more about them, follow the mailing list. The team plans to install the new license alongside a developer policy that references both the new and old licenses. From that point, all subsequent contributions will be under both licenses.

They have a two-fold plan to ensure contributors are aware of the change. They are going to ask many active contributors (both enterprises and individuals) to explicitly sign an agreement to relicense their contributions; signing makes the change clear and known while also covering historical contributions. For any other contributors, commit access will be revoked until the LLVM organization can confirm that they are covered by one of the agreements.

The agreements

For the plan to work, both individuals and companies need to sign an agreement to relicense, and there is a process for each.

Individuals: Individuals will have to fill out a form with the necessary information, such as email addresses and potential employers, to effectively relicense their contributions. The form contains a link to a DocuSign agreement to relicense any of their individual contributions under the new license. Signing the document makes things easier, as it avoids confusion about which contributions are covered and whether they fall under a company agreement. The form and agreement are available via Google Forms.

Companies: There is a DocuSign agreement for companies too. Some companies, like Argonne National Laboratory and Google, have already signed it. There will be no explicit copyright notice, as the team does not feel it is worthwhile.

The current planned timeline is to install the new developer policy and the new license after the LLVM 8.0 release in January 2019. For more details, you can read the mail.

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
LLVM 7.0.0 released with improved optimization and new tools for monitoring
OpenMP, libc++, and libc++abi, are now part of llvm-toolchain package

“Why was Rust chosen for Libra?”, US Congressman questions Facebook on Libra security design choices

Sugandha Lahoti
22 Jul 2019
6 min read
Last month, Facebook announced that it is going to launch its own cryptocurrency, Libra, and Calibra, a payment platform that sits on top of the cryptocurrency, unveiling its plans to develop an entirely new ecosystem for digital transactions. It also developed a new programming language, “Move”, for implementing custom transaction logic and “smart contracts” on the Libra Blockchain. The Move language is written entirely in Rust.

Although Facebook’s announcement garnered massive media attention and attracted investors and partners the likes of PayPal, loan platform Kiva, Uber, and Lyft, it had its own share of concerns. The US administration is worried about a non-governmental currency in the hands of big tech companies. In early July, the US Congress asked Facebook to suspend the implementation of Libra until the ramifications were investigated. Last week, at the U.S. House Committee on Financial Services hearing investigating Libra’s security-related challenges, Congressman Denver Riggleman posed an important question to David Marcus, head of Calibra: why was the Rust language chosen for Libra?

Riggleman: I was really surprised about the Rust language. So my first question is, why was the Rust language chosen as the implementation language for Libra? Do you believe it's mature enough to handle the security challenges that will affect these large cryptocurrency transactions?

Marcus: The Libra association will own the repository for the code. While there are many flavors and branches being developed by third parties, only safe and verified code will actually be committed to the actual Libra code base which is going to be under the governance of the Libra association.

Riggleman: It looks like Libra was built on the nightly build of the Rust programming language. It's interesting because that's not how we did releases at the DoD. What features of Rust are only available in the nightly build that aren't in the official releases of Rust? Does Facebook see it as a concern that they are dependent on unofficially released features of the Rust programming language? Why the nightly releases? Do you see this as a function of the prototyping phase of this?

Marcus: Congressman, I don’t have the answers to your very technical questions but I commit that we will get back to you with more details on your questions.

Marcus appeared before two US congressional hearing sessions last week, where he was constantly grilled by legislators. The grilling led to a dramatic alteration in Libra’s strategy: Marcus has clarified that Facebook won’t move forward with Libra until all concerns are addressed. Facebook’s original vision for Libra was an open and largely decentralized network beyond the reach of regulators; instead, regulatory compliance would be the responsibility of exchanges, wallets, and other services in the Libra ecosystem. After the hearing, Marcus stated that the Libra Association would have a deliberately limited role in regulatory matters. Per Ars Technica, Calibra would follow US regulations on consumer protection, money laundering, sanctions, and so forth, but Facebook didn’t seem to have plans for the Libra Association, Facebook, or any associated entity to police illegal activity on the Libra network as a whole.

This video clip sparked quite the discussion on Hacker News and Reddit, with people applauding the Q&A session. Some appreciated that legislators are now asking tough questions like these:

“It's cool to see a congressman who has this level of software dev knowledge and is asking valid questions.”

“Denver Riggleman was an Air Force intelligence officer for 11 years, then he became an NSA contractor. I'm not surprised he's asking reasonable questions.”

“I don't think I've ever heard of a Congressman going to GitHub, poking around in some open source code, and then asking very cogent and relevant questions about it. This video is incredible if only because of that.”

Others commented on why Congress may have trust issues with a young programming language like Rust being used for something like Libra, which requires layers of privacy and security measures:

“Traditionally, government people have trust issues with programming languages as the compiler is, itself, an attack vector. If you are using a nightly release of the compiler, it may be assumed by some that the compiler is not vetted for security and could inject unstable or malicious code into another critical codebase. Also, Rust is considered very young for security type work, people rightly assume there are unfound weaknesses due to the newness of the language and related libraries”, reads one comment from Hacker News.

Another adds, “Governments have issues with non-stable code because it changes rapidly, is untested and a security risk. Facebook moves fast and break things.”

Rust was declared the most loved programming language by developers in the Stack Overflow survey 2019, and this year more and more major platforms have jumped on the bandwagon of writing or rewriting their components in Rust. Last month, after the release of Libra, Calibra tech lead Ben Maurer took to Reddit to explain why Facebook chose Rust. Per Maurer, “As a project where security is a primary focus, the type-safety and memory-safety of Rust were extremely appealing. Over the past year, we've found that even though Rust has a high learning curve, it's an investment that has paid off. Rust has helped us build a clean, principled blockchain implementation. Part of our decision to choose Rust was based on the incredible momentum this community has achieved. We'll need to work together on challenges like tooling, build times, and strengthening the ecosystem of 3rd-party crates needed by security-sensitive projects like ours.”

It is not just Facebook: last week, Microsoft announced plans to replace some of its C and C++ code with Rust, calling it a “modern safer system programming language” with great memory safety features. In June, the Brave ad-blocker released a new engine written in Rust which gives 69x better performance, and PyOxidizer, a Python application packaging and distribution tool written in Rust, was also introduced recently.

“I’m concerned about Libra’s model for decentralization”, says co-founder of Chainspace, Facebook’s blockchain acquisition
Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
Facebook releases Pythia, a deep learning framework for vision and language multimodal research