
Tech News - Programming

573 Articles

.NET for Apache Spark Preview is out now!

Amrata Joshi
25 Apr 2019
3 min read
Yesterday, at the Spark + AI Summit, the team announced .NET for Apache Spark. Apache Spark is a popular open source distributed processing engine used for analytics over large data sets; it can also be used for processing real-time streams, batches of data, machine learning, and ad-hoc queries.

.NET for Apache Spark for developers

.NET for Apache Spark aims at making Apache Spark accessible to .NET developers across all Spark APIs. The team aims to develop .NET for Apache Spark in the open (as a .NET Foundation member project) along with the Spark and .NET developer communities.

.NET for Apache Spark comes with high-performance APIs for using Spark from C# and F#. With .NET APIs, users can now access all aspects of Apache Spark including streaming, Spark SQL, DataFrames, MLLib, and more, which lets developers reuse their existing skills, code, knowledge, and libraries. The C# and F# language bindings to Spark will be written on a new Spark interop layer that offers easier extensibility. .NET for Apache Spark can be used on Linux, macOS, and Windows and is compliant with .NET Standard 2.0.

.NET for Apache Spark performance

The first preview version of .NET for Apache Spark performs well on the popular TPC-H benchmark, which consists of a suite of business-oriented queries. .NET for Apache Spark performs well against Python and Scala, and is about 2 times faster than Python in some queries.

What more features can be expected?

In the future, the team aims to simplify the documentation and samples and work towards native integration with developer tools such as Visual Studio, Visual Studio Code, and Jupyter notebooks. Developers can also expect .NET support for user-defined aggregate functions and .NET-idiomatic APIs for C# and F# (e.g., using LINQ for writing queries). The team is also working towards adding support for Azure Databricks, Kubernetes, and more, and making .NET for Apache Spark part of Spark Core.

A few users are excited about this news and are expecting some major improvements with .NET for Spark. A user commented on Hacker News, “I've seen the announcement about .NET interop support in Apache Spark some time ago. The benchmarks are interesting and tell the story - in few cases it is faster than Python, but slower than native (for Spark) Scala/JVM. Maybe with Arrow interchange Python's performance would increase (and for other interops that would use Arrow - i.e. for .Net).”

A few others are confused about the transition, as they have to get their teams shifted to the new setup. Another user commented, “Indeed the real sad part is you can’t lead teams there early (premature optimization). Everybody seems to make the same rough transition on their own.”

To know more about this news, check out the post by Apache Spark.

Winners for the 2019 .NET Foundation Board of Directors elections are finally declared
Fedora 31 will now come with Mono 5 to offer open-source .NET support
ML.NET 1.0 RC releases with support for TensorFlow models and much more!
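As a rough illustration of the C# API described in the announcement above, here is a minimal sketch using the Microsoft.Spark package; the application name, input file, and query are illustrative assumptions for the example, not taken from the announcement.

```csharp
using Microsoft.Spark.Sql;

class Program
{
    static void Main(string[] args)
    {
        // Start (or reuse) a Spark session from .NET.
        SparkSession spark = SparkSession
            .Builder()
            .AppName("dotnet-spark-sketch")   // hypothetical app name
            .GetOrCreate();

        // Read a JSON data set into a DataFrame ("people.json" is a placeholder).
        DataFrame people = spark.Read().Json("people.json");

        // Run a simple Spark SQL query over the DataFrame.
        people.CreateOrReplaceTempView("people");
        DataFrame adults = spark.Sql("SELECT name, age FROM people WHERE age >= 18");

        adults.Show();
        spark.Stop();
    }
}
```

The same classes are exposed to F#, so an F# version of this sketch differs only in syntax.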


Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues

Amrata Joshi
02 Sep 2019
3 min read
Last week, the team at Microsoft announced the XLOOKUP function for Excel users, a successor to VLOOKUP, typically the first lookup function learned by Excel users. XLOOKUP gives Excel users an easier way of displaying information in their spreadsheets. Currently, the function is only available to Office 365 testers; the company will be making it more broadly available later. XLOOKUP has the ability to look vertically as well as horizontally, so it replaces HLOOKUP too. XLOOKUP needs only 3 arguments to perform the most common exact lookup, whereas VLOOKUP required 4. The official post reads, “Let’s consider its signature in the simplest form:

XLOOKUP(lookup_value, lookup_array, return_array)

lookup_value: What you are looking for
lookup_array: Where to find it
return_array: What to return”

XLOOKUP overcomes the limitations of VLOOKUP

Exact match in XLOOKUP is possible
VLOOKUP defaulted to an approximate match of what the user was looking for, rather than the exact match. With XLOOKUP, users now get the exact match by default.

Data can be drawn on both sides
VLOOKUP can only draw on data to the right-hand side of the reference column, so users had to rearrange their data to use the function. With XLOOKUP, users can easily draw on data both to the left and to the right, and it combines VLOOKUP and HLOOKUP into a single function.

Column insertions/deletions
VLOOKUP’s 3rd argument is the column number, so if you insert or delete a column you have to increment or decrement the column number inside the VLOOKUP. With XLOOKUP, users can insert or delete columns freely.

Search from the back is now possible
With VLOOKUP, users needed to reverse the order of the data to find the last occurrence of a value, but with XLOOKUP it is easy to search the data from the back.

References cells systematically
For VLOOKUP, the 2nd argument, table_array, needs to stretch from the lookup column to the results column. It references more cells than necessary, which results in unnecessary calculations and reduces the performance of your spreadsheets. XLOOKUP references only the cells it needs, so it avoids those unnecessary calculations.

In an email to CNBC, Joe McDaid, Excel’s senior program manager, wrote that XLOOKUP is “more powerful than INDEX/MATCH and more approachable than VLOOKUP.” To know more about this news, check out the official post.

What’s new in application development this week?
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs
Qt introduces Qt for MCUs, a graphics toolkit for creating a fluid user interface on microcontrollers
Twilio launched Verified By Twilio, that will show customers who is calling them and why
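As a quick illustration of the three-argument form quoted above, here is a hypothetical side-by-side comparison; the worksheet layout (IDs in column B, names in column D, the value to look up in G2) is an assumption for the example, not something from Microsoft's post.

```
VLOOKUP needs the whole table, a column index, and FALSE for an exact match:
    =VLOOKUP(G2, B2:D10, 3, FALSE)

XLOOKUP needs only the lookup value, the lookup range, and the return range,
and an exact match is the default:
    =XLOOKUP(G2, B2:B10, D2:D10)
```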


Microsoft open sources ‘Accessibility Insights for Web’, a Chrome extension to help web developers fix their accessibility issues

Sugandha Lahoti
14 Mar 2019
2 min read
On Tuesday, Microsoft open sourced its Accessibility Insights tools, allowing developers to easily find and fix common accessibility issues early in the development cycle. This includes Accessibility Insights for Windows and Accessibility Insights for Web, both built on Deque’s open source axe technology. You can run quick tests, easily create audits that you can export and share with others, and even file issues to GitHub.

Accessibility Insights for Web

Accessibility Insights for Web is a Chrome extension that helps developers find and fix accessibility issues in web apps and sites. The tool comes with a lightweight, two-step process called FastPass that helps developers identify common, high-impact accessibility issues. FastPass uses automated checks to verify compliance with approximately 50 accessibility requirements. It also makes use of tab stops to provide clear instructions and a visual helper for identifying accessibility issues related to keyboard access, such as missing tab stops, keyboard traps, and incorrect tab order.

The second part of Accessibility Insights is Assessment, which helps developers verify whether a web app or site is 100% compliant with the Web Content Accessibility Guidelines (WCAG) 2.0 Level AA. It combines automated checks with manual testing that provides step-by-step instructions, examples, and how-to-fix guidance for approximately 20 tests.

Deque Systems provides GitHub issue filing for Accessibility Insights for Web, and color contrast detection heuristics for Accessibility Insights for Windows.

On why Accessibility Insights is open sourced, Microsoft writes in a blog post, “We are driven by the promise of more accessible products for more people. That’s why we’re releasing Accessibility Insights to the open source and accessibility communities – it’s all of ours now, and together we’ll continue to make it a better tool and build a more accessible future.”

You can read more about Accessibility Insights on its website.

It’s a win for Web accessibility as courts can now order companies to make their sites WCAG 2.0 compliant
W3C and FIDO Alliance declare WebAuthn as the web standard for password-free logins
Microsoft open sources the Windows Calculator code on GitHub


The Go team shares new proposals planned to be implemented in Go 1.13 and 1.14

Bhagyashree R
27 Jun 2019
5 min read
Yesterday, the Go team shared the details of what is coming in Go 1.13, the first release implemented using the new proposal evaluation process. In this process, feedback is taken from the community on a small number of proposals to reach a final decision. The team also shared which proposals they have selected to implement in Go 1.14 and the next steps. At GopherCon 2017, Russ Cox, Go programming language tech lead at Google, first disclosed the plan behind the implementation of Go 2. The plan was simple: the updates will be done in increments and will have minimal to no effect on everybody else.

Updates in Go 1.13

Go 1.13, which marks the first increment towards Go 2, is planned for release in early August this year. A number of language changes have landed in this release, shortlisted from the huge list of Go 2 proposals based on the new proposal evaluation process. These proposals were selected under the criteria that they should address a problem, have minimal disruption, and provide a clear and well-understood solution. The team selected “relatively minor and mostly uncontroversial” proposals for this version. These changes are backward-compatible, as modules, Go’s new dependency management system, is not the default build mode yet. Go 1.11 and Go 1.12 include preliminary support for modules, which makes dependency version information explicit and easier to manage.

Proposals planned to be implemented in Go 1.13

The proposals that were initially planned for Go 1.13 were:

General Unicode identifiers based on Unicode TR31: This proposes adding support for enabling programmers using non-Western alphabets to combine characters in identifiers and export uncased identifiers.

Binary integer literals and support for _ in number literals: Go comes with support for octal, hexadecimal, and standard decimal literals. However, unlike other mainstream languages like Java 7, Python 3, and Ruby, it does not have support for binary integer literals. This proposes adding binary integer literals with a new prefix for integer literals, 0b or 0B. Another minor update is adding support for a blank (_) as a separator in number literals to improve the readability of long numbers.

Permit signed integers as shift counts: This proposes to change the language spec such that the shift count can be a signed or unsigned integer, or any non-negative constant value that can be represented as an integer.

Out of these shortlisted proposals, the binary integer literals, separators for number literals, and signed integer shift counts have been implemented. The general Unicode identifiers proposal was not implemented as there was no “concrete design document in place in time.” The proposal to support binary integer literals was significantly expanded, which led to an overhauled and modernized number literal syntax for Go.
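As an illustration of the literal and shift-count changes described above, here is a small sketch (my own, not taken from the Go team's post) that compiles under Go 1.13:

```go
package main

import "fmt"

func main() {
	// Binary integer literal with the new 0b prefix.
	mask := 0b1010_0001

	// Blank (_) separators make long decimal literals easier to read.
	const budget = 10_000_000

	// Shift counts may now be signed integers; no conversion to uint is needed.
	shift := 4
	fmt.Println(mask<<shift, budget)
}
```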
Updates in Go 1.14

After the relatively minor updates in Go 1.13, the team plans to take it up a notch with Go 1.14. With the new major version Go 2, their overarching goal is to give programmers improved scalability. To achieve this, the team has to tackle the three biggest hurdles: package and version management, better error handling support, and generics. The first hurdle, package and version management, is being addressed by the modules feature, which is growing stronger with each release. For the other two, the team presented draft designs at last year’s GopherCon in Denver.

https://youtu.be/6wIP3rO6On8

Proposals planned to be implemented in Go 1.14

The following proposals are shortlisted for Go 1.14:

A built-in Go error check function, ‘try’: This proposes a new built-in function named ‘try’ for error handling. It is designed to remove the boilerplate ‘if’ statements typically associated with error handling in Go.

Allow embedding overlapping interfaces: This is a backward-compatible proposal to make interface embedding more tolerant.

Diagnose ‘string(int)’ conversion in ‘go vet’: This proposes to diagnose the explicit type conversion string(i) where ‘i’ has an integer type other than ‘rune’. The team is making this backward-incompatible change as the conversion was introduced in the early days of Go and has become quite confusing to comprehend.

Adopt crypto principles: This proposes to implement the design principles for cryptographic libraries outlined in the Cryptography Principles document.

The team is now seeking community feedback on these proposals. “We are especially interested in fact-based evidence illustrating why a proposal might not work well in practice or problematic aspects we might have missed in the design. Convincing examples in support of a proposal are also very helpful,” the blog post reads.

While developers are confident that Go 2 will bring a lot of exciting features and enhancements, not everyone is a fan of some of the proposed features, for instance, the try function. “I dislike the try implementation, one of Go's strengths for me after working with Scala is the way it promotes error handling to a first class citizen in writing code, this feels like its heading towards pushing it back to an afterthought as tends to be the case with monadic operations,” a developer commented on Hacker News.

Some Twitter users also expressed their dislike of the proposed try function:
https://twitter.com/nicolasparada_/status/1144005409755357186
https://twitter.com/dullboy/status/1143934750702362624

These were some of the updates proposed for Go 1.13 and Go 1.14. To know more about this news, check out the Go Blog.

Go 1.12 released with support for TLS 1.3, module support among other updates
Go 1.12 Release Candidate 1 is here with improved runtime, assembler, ports and more
State of Go February 2019 – Golang developments report for this month released


Apache Software Foundation finally joins the GitHub open source community

Amrata Joshi
30 Apr 2019
3 min read
In 2016, Apache decided to start integrating GitHub’s repositories and tooling with its own services. After working on the integration over the years, the Foundation moved all of its Git projects to GitHub to simplify how they work. By February this year, Apache had completed the migration to GitHub, giving all of its projects a simple platform to host and review code, collaborate on projects, and build software alongside developers around the world.

Greg Stein, ASF Infrastructure Administrator, said, "In 2016, the Foundation started integrating GitHub's repository and tooling, with our own services. This enabled selected projects to use GitHub's excellent tools. Over time, we improved, debugged, and solidified this integration. In late 2018, we asked all projects to move away from our internal git service, to that provided by GitHub. This shift brought all of their tooling to our projects, while we maintain a backup mirror on our infrastructure."

Yesterday, the Apache Software Foundation (ASF) officially joined the GitHub open source community. The Apache Software Foundation has 200M+ lines of code that are managed by an all-volunteer community of 730 individuals.

Nat Friedman, Chief Executive Officer of GitHub, said of the announcement, "We're proud to have such a long standing member of the Open Source community migrate to GitHub. Whether we're working with individual Open Source maintainers and contributors or some of the world's largest Open Source foundations like Apache, GitHub's mission is to be the home for all developers by supporting Open Source communities, addressing their unique needs, and helping Open Source projects thrive."

Initially, Apache projects had two version control services, Apache Subversion and Git. As the number of projects increased, ASF communities wanted to see their source code available on GitHub, but those repositories were read-only mirrors and the ability to use GitHub's tools around them was very limited. This led Apache to the decision to join GitHub.

Greg Stein further added, "We continue to experiment and expand the set of services that GitHub can provide to our communities, given our own needs and requirements. The Foundation has started working closely with GitHub management to explore ways to make this happen, and what will be possible in the future."

Many users think that the reason for Apache's migration to GitHub was the increasing cost of managing its own code hosting infrastructure. A user commented on Hacker News, “Apparently, one of the big motivating reasons for this was "cost". The foundation’s 2018 five-year strategic plan noted that infrastructure services account for more than 80 percent of the total ASF expense budget, adding: Increasingly, project communities have infrastructure requirements that strain the capabilities of the ASF. The report noted that, given burgeoning costs, encouraging the use of more externally provided services was its best option.”

Another comment reads, “Holy shit, they're spending $800k a year on infrastructure! Honestly, it's difficult to understand why they haven't sooner moved to GitHub, or even GitLab or the like - it feels reckless. That money could be put to far greater use - as an Apache supporter who hasn't ever felt the need to look at their costs, I have to say that I'm very disappointed.”

To know more about this news, check out Apache’s blog post.
Microsoft and GitHub employees come together to stand with the 996.ICU repository
‘Developers’ lives matter’: Chinese developers protest over the “996 work schedule” on GitHub
GitHub releases Vulcanizer, a new Golang Library for operating Elasticsearch


Future of ESLint support in TypeScript

Prasad Ramesh
21 Jan 2019
2 min read
In a blog post, the ESLint team talks about the future of ESLint support in TypeScript. Earlier, the TypeScript team had shared their plans for the future, which included adopting ESLint in their repository to improve compatibility between the two. Based on feedback from the TypeScript community, it was discovered that the linting experience in TypeScript was not that good. The TypeScript team then announced support for both ESLint and TSLint. ESLint worked better with TypeScript, while continuing with TSLint would cause duplicate work and induce some breaking changes; some lint rules were also missing in TSLint. Hence the focus has been on incorporating ESLint.

Many members of the ESLint team have been working to improve its compatibility with TypeScript. The focus of this earlier work was on the TypeScript parser, typescript-eslint-parser. Beyond that, there were also efforts on eslint-plugin-typescript, which was maintained by individual team members. The TypeScript parser will be an important part of the integration of the two, and the ESLint team wants to ensure its proper maintenance.

The typescript-eslint project

James Henry, a key contributor working on ESLint compatibility in TypeScript, started the typescript-eslint project as a centralized repository. It contains everything pertaining to TypeScript ESLint compatibility and will house the TypeScript parser, eslint-plugin-typescript, and other utilities that aid the TypeScript ESLint integration. The ESLint team itself won't formally be part of this project, but appears supportive of Henry's efforts.

ESLint’s future in TypeScript

The official ESLint team will no longer maintain typescript-eslint-parser. The repository is now archived and there will be no future releases of typescript-eslint-parser on npm. Users of typescript-eslint-parser are advised to switch to @typescript-eslint/parser. The typescript-eslint repository will be updated for any new developments on ESLint support in TypeScript.

Announcing ‘TypeScript Roadmap’ for January 2019 - June 2019
TypeScript 3.2 released with configuration inheritance and more
Introducing ReX.js v1.0.0 a companion library for RegEx written in TypeScript
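For projects making the switch described above, a minimal .eslintrc.json might look like the sketch below; the extends and rules entries are common defaults from the @typescript-eslint packages, shown here as an illustration rather than a recommendation from the ESLint team.

```json
{
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "extends": ["plugin:@typescript-eslint/recommended"],
  "rules": {
    "@typescript-eslint/no-unused-vars": "error"
  }
}
```

The @typescript-eslint/parser package replaces typescript-eslint-parser, and eslint-plugin-typescript lives on as @typescript-eslint/eslint-plugin in the same repository.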

Libc++ 9 releases with explicit support for WebAssembly System Interface (WASI)

Sugandha Lahoti
14 Oct 2019
2 min read
On Friday, libc++ version 9 was released; libc++ is an implementation of the C++ standard library, targeting C++11, C++14 and above. libc++ 9 is part of the LLVM Compiler Infrastructure release 9.0.0, which was made available in September. libc++ 9 adds explicit support for the WebAssembly System Interface (WASI) along with major improvements from the previous release and new feature work. libc++ has also dropped support for GCC 4.9; it now supports GCC 5.1 and above.

WASI is a system interface for the WebAssembly platform. Currently, it supports sandboxed access to the filesystem via a POSIX-like API, as well as other basic interfaces like argv, environment variables, random numbers, and timers. There are three popular implementations of WASI: wasmtime (Mozilla’s WebAssembly runtime), Lucet (Fastly’s WebAssembly runtime), and a browser polyfill.

Improvements in libc++ 9

Minor fixes to std::chrono operators.
libc++ now correctly handles Objective-C++ ARC qualifiers in std::is_pointer.
front and back methods are added to std::span.
std::to_chars now adds leading zeros.
Ensure std::tuple is trivially constructible.
std::aligned_union now works in C++03.
Output of nullptr to std::basic_ostream is formatted properly.
P0608 is now implemented as a sane variant converting constructor.
std::is_unbounded_array and std::is_bounded_array added to type traits.
std::atomic now includes many new features and specializations.
Added the std::midpoint and std::lerp math functions and the std::is_constant_evaluated function.
Erase-like algorithms now return size type.
Added a contains method to container types.
std::swap is now a constant expression.
std::move and std::forward now both work in C++03 mode.

People on Twitter were quite happy with the WASI support in libc++:
https://twitter.com/Stephen_d2005/status/1178489876070535168
https://twitter.com/iwillrunoutofsp/status/1182702301062008832

You can also see the release notes for additional information.

Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations
LLVM 9 releases with official RISC-V target support, asm goto, Clang 9, and more
LLVM’s Clang 9.0 to ship with experimental support for OpenCL C++17, asm goto initial support and more
LLVM’s Arm stack protection feature turns ineffective when the stack is re-allocated
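To give a feel for a few of the additions listed above, here is a small sketch (my own, not from the release notes) that exercises std::midpoint, std::lerp, and the new contains member; it assumes a compiler running in C++2a mode against libc++ 9.

```cpp
#include <cmath>
#include <iostream>
#include <numeric>
#include <set>

int main() {
    // std::midpoint computes the value halfway between two numbers
    // without risking overflow.
    std::cout << std::midpoint(6, 10) << '\n';        // prints 8

    // std::lerp linearly interpolates between two values.
    std::cout << std::lerp(0.0, 10.0, 0.25) << '\n';  // prints 2.5

    // Associative containers gained a contains() member.
    std::set<int> versions{7, 8, 9};
    std::cout << std::boolalpha << versions.contains(9) << '\n';  // prints true
}
```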


GitLab faces backlash from users over performance degradation issues tied to redis latency

Vincy Davis
02 Jul 2019
4 min read
Yesterday, GitLab suffered major performance degradation, with a 5x increased error rate and site slowdown. The degradation was identified and rectified within a few hours of its discovery.

https://twitter.com/gabrielchuan/status/1145711954457088001
https://twitter.com/lordapo_/status/1145737533093027840

The GitLab engineers promptly started investigating the slowdown on GitLab.com and notified users that the slowdown was in redis and the LRU cluster, thus impacting all web requests serviced by the Rails front-end. What followed was very comprehensive detail about the issue, its causes, who was handling which part of it, and more. GitLab’s step-by-step response looked like this:

First, they investigated slow response times on GitLab.com.
Next, they added more workers to alleviate the symptoms of the incident.
Then, they investigated jobs on shared runners that were being picked up at a low rate or appeared to be stuck.
Next, they tracked the CI issues and the observed performance degradation as one incident.
Over time, they continued to investigate the degraded performance and CI pipeline delays.
After a few hours, all services were restored to normal operation and the CI pipelines caught up from the earlier delays to nearly normal levels.

David Smith, the Production Engineering Manager at GitLab, also updated users that the performance degradation was due to a few issues tied to redis latency. Smith added, “We have been looking into the details of all of the network activity on redis and a few improvements are being worked on. GitLab.com has mostly recovered.”

Many users on Hacker News wrote about their unpleasant experiences with GitLab.com. A user stated, “I recently started a new position at a company that is using Gitlab. In the last month I've seen a lot of degraded performance and service outages (especially in Gitlab CI). If anyone at Gitlab is reading this - please, please slow down on chasing new markets + features and just make the stuff you already have work properly, and fill in the missing pieces.”

Another user comments, “Slow down, simplify things, and improve your user experience. Gitlab already has enough features to be competitive for a while, with the Github + marketplace model.”

Later, a GitLab employee with the username kennyGitLab commented that GitLab is not losing sight and is just following the company’s new strategy of ‘breadth over depth’. He further added, “We believe that the company plowing ahead of other contributors is more valuable in the long run. It encourages others to contribute to the polish while we validate a future direction. As open-source software we want everyone to contribute to the ongoing improvement of GitLab.”

Users were indignant at this response. A user commented, “"We're Open Source!" isn't a valid defense when you have paying customers. That pitch sounds great for your VCs, but for someone who spends a portion of their budget on your cloud services - I'm appalled. Gitlab is a SaaS company who also provides an open source set of software. If you don't want to invest in supporting up time - then don't sell paid SaaS services.”

Another comment read, “I think I understand the perspective, but the messaging sounds a bit like, ‘Pay us full price while serving as our beta tester; sacrifice the needs of your company so you can fulfill the needs of ours’.”

A few users also praised GitLab for the prompt action and for providing everybody with in-depth detail about the investigation.
A user wrote, “This is EXACTLY what I want to see when there's a service disruption. A live, in-depth view of who is doing what, any new leads on the issue, multiple teams chiming in with various diagnostic stats, honestly it's really awesome. I know this can't be expected from most businesses, especially non-open sourced ones, but it's so refreshing to see this instead of the typical "We're working on a potential service disruption" that we normally get.”

GitLab goes multicloud using Crossplane with kubectl
Atlassian Bitbucket, GitHub, and GitLab take collective steps against the Git ransomware attack
Attackers wiped many GitHub, GitLab, and Bitbucket repos with ‘compromised’ valid credentials leaving behind a ransom note


ReactOS version 0.4.9 released with Self-hosting and FastFAT crash fixes

Sugandha Lahoti
24 Jul 2018
3 min read
ReactOS, the free and open source “Windows-like” operating system, has released a new version. ReactOS 0.4.9 comes with system stability and general consistency improvements such as self-hosting, shell improvements, FastFAT crash fixes and more. As the project targets a new release every three months, the focus is on incremental improvements rather than headliner changes.

ReactOS is now capable of Self-Hosting

Self-hosting here means building the OS on the OS itself. Although self-hosting is considered a milestone in any OS’s maturity, it comes with many challenges of its own. First, compiling any large codebase involves heavy memory usage and storage I/O, stressing the operating system. Scheduling is also stressed, as modern build systems generally attempt to spawn multiple compilation processes to speed up the build. ReactOS featured self-hosting in an older version; however, changes brought by subsequent releases, such as the reworking of the kernel, broke the self-hosting process. With the recent changes made to the filesystem, self-hosting is now fully re-established in the 0.4.9 release. The open source FreeBSD project’s implementation of qsort played a major role in achieving this.

Stability brought in by fixing FastFAT crashes

ReactOS had significant resource leakages caused by the FastFAT driver. The leakage was eating up the common cache to the point where attempts to copy large files would result in a crash. The new version fixes the FastFAT driver’s behavior by adding write throttling support and restraining its usage of the cache. A more conservative usage of the cache may slow the system a bit during I/O operations; however, it ensures that resources remain available to service large I/O operations instead of crashing as before. The FastFAT driver also features a complete rewrite of the support for dirty volumes, greatly reducing the chance of file corruption. This protects the system from becoming unusable after a crash.

Shell Improvements & Features

The shell has also received several upgrades. It now has a built-in zipfldr (Zip Folder) extension, so ReactOS can uncompress zipped files without needing third-party tools. It also allows users to choose whether to move, copy, or link a file or folder when they drag it with the right mouse button.

Some other new improvements

A new mouse properties dialog in the GUI component of the ReactOS installer.
The inclusion of RAPPS, the gateway program used for getting various applications installed on ReactOS, now with Unicode support so that ReactOS can easily support many different languages.
ReactOS can now present itself as Windows 8.1 with the Version APIs.

These are just a select few major updates. For a full list of features, upgrades, and improvements, read the changelog.

Microsoft releases Windows 10 SDK Preview Build 17115 with Machine Learning APIs
Microsoft releases Windows 10 Insider build 17682!
What’s new in the Windows 10 SDK Preview Build 17704


Python 3.8 alpha 2 is now available for testing

Natasha Mathur
27 Feb 2019
2 min read
After releasing Python 3.8.0 alpha 1 earlier this month, the Python team released the second of the four planned alpha releases of Python 3.8, Python 3.8.0a2, last week. Alpha releases make it easier for developers to test the current state of new features and bug fixes, and to exercise the release process. The Python team states that many new features for Python 3.8 are still being planned and written. Here is a list of some of the major new features and changes so far; these features are currently raw and not meant for production use:

PEP 572, assignment expressions, has been accepted. Users can now assign to variables within an expression with the notation NAME := expr. A new exception, TargetScopeError, has also been added, along with one change to the evaluation order.

typed_ast, a fork of the ast module (in C) used by mypy, pytype, and other tools, has been merged back into CPython. typed_ast helps preserve certain comments.

The multiprocessing module now allows using shared memory segments to avoid pickling costs and the need for serialization between processes.

The next pre-release of Python 3.8 will be Python 3.8.0a3, scheduled for 25th March 2019. For more information, check out the official Python 3.8.0a2 announcement.

PyPy 7.0 released for Python 2.7, 3.5, and 3.6 alpha
5 blog posts that could make you a better Python programmer
Python Software foundation and JetBrains’ Python Developers Survey 2018
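As a quick, hypothetical illustration of the NAME := expr notation mentioned above (my own example, not taken from the announcement), the following runs on Python 3.8:

```python
data = [2, 5, 11, 3]

# Bind the length once and reuse it inside the same condition.
if (n := len(data)) > 3:
    print(f"data has {n} elements, more than expected")

# Reuse a computed value in a list comprehension's filter.
big_squares = [sq for x in data if (sq := x * x) > 10]
print(big_squares)  # [25, 121]
```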

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf.function and more

Amrata Joshi
15 Jan 2019
3 min read
Just two months ago, Google’s TensorFlow, one of the most popular machine learning platforms, celebrated its third birthday. Last year in August, Martin Wicke, an engineer at Google, posted the list of what’s expected in TensorFlow 2.0, the open source machine learning framework, on the Google group. The key features listed by him include:

This release will come with eager execution.
This release will feature more platforms and languages along with improved compatibility.
The deprecated APIs will be removed.
Duplication will be reduced.

https://twitter.com/aureliengeron/status/1030091835098771457

The early preview of TensorFlow 2.0 is expected soon. TensorFlow 2.0 is expected to come with high-level APIs, robust model deployment, powerful experimentation for research, and a simplified API.

Easy model building with Keras

This release will come with Keras, a user-friendly API standard for machine learning, which will be used for building and training models. As Keras provides various model-building APIs including sequential, functional, and subclassing, it becomes easier for users to choose the right level of abstraction for their project.

Eager execution and tf.function

TensorFlow 2.0 will also feature eager execution, which will be used for immediate iteration and debugging. tf.function will translate Python programs into TensorFlow graphs, keeping the performance optimizations of graphs while adding the flexibility of expressing programs in simple Python. Further, tf.data will be used for building scalable input pipelines.

Transfer learning with TensorFlow Hub

The team at TensorFlow has made it much easier for those who do not want to build a model from scratch. Users will soon get a chance to use models from TensorFlow Hub, a library of reusable parts of machine learning models, to train a Keras or Estimator model.

API Cleanup

Many APIs are removed in this release, among them tf.app, tf.flags, and tf.logging. The main tf.* namespace will be cleaned up by moving lesser-used functions into subpackages such as tf.math. A few APIs have been replaced with their 2.0 equivalents like tf.keras.metrics, tf.summary, and tf.keras.optimizers. The v2 upgrade script can be used to apply these renames automatically.

Major Improvements

The queue runners will be removed in this release.
The graph collections will also be removed.
Some APIs will be renamed in this release for better usability. For example, name_scope can now be accessed using tf.name_scope or tf.keras.backend.name_scope.

To ease migration to TensorFlow 2.0, the TensorFlow team will provide a conversion tool for updating TensorFlow 1.x Python code to use TensorFlow 2.0 compatible APIs. It will flag the cases where code cannot be converted automatically. In this release, stored GraphDefs and SavedModels will be backward compatible. With this release, tf.contrib will no longer be in use. Some of the existing contrib modules will be integrated into the core project or moved to a separate repository; the rest will be removed.

To know more about this news, check out the post by the TensorFlow team on Medium.
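As a rough sketch of the eager-plus-tf.function style described above (illustrative only; the layer sizes and data are made up, not taken from the article), a TensorFlow 2.0 program might look like this:

```python
import tensorflow as tf

# Eager execution is on by default in TensorFlow 2.0: operations run immediately.
x = tf.constant([[1.0, 2.0]])
print(tf.reduce_sum(x))  # tf.Tensor(3.0, ...)

# A small Keras Sequential model (sizes are arbitrary for the example).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(2,)),
    tf.keras.layers.Dense(1),
])

# tf.function traces the Python function into a TensorFlow graph,
# keeping graph performance while the code stays plain Python.
@tf.function
def predict(inputs):
    return model(inputs)

print(predict(x))
```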
Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google researchers introduce JAX: A TensorFlow-like framework for generating high-performance code from Python and NumPy machine learning programs


Thanks to DeepCode, AI can help you write cleaner code

Richard Gall
30 Apr 2018
2 min read
DeepCode is a tool that uses artificial intelligence to help software engineers write cleaner code. It's a bit like Grammarly or the Hemingway Editor, but for code. It works in an ingenious way: using AI, it reads your GitHub repositories and highlights anything that might be broken or cause compatibility issues. It is currently only available for Java, JavaScript, and Python, but more languages are going to be added.

DeepCode is more than a debugger

Sure, DeepCode might sound a little like a glorified debugger. But it's important to understand it's much more than that. It doesn't just correct errors, it can actually help you improve the code you write. That means the project's mission isn't just code that works, but code that works better. It's thanks to AI that DeepCode is able to support code performance too - the software learns 'rules' about how code works best. And because DeepCode is an AI system, it's only going to get better as it learns more.

Speaking to TechCrunch, Boris Paskalev claimed that DeepCode has more than 250,000 rules. This is "growing daily." Paskalev went on to explain: "We built a platform that understands the intent of the code... We autonomously understand millions of repositories and note the changes developers are making. Then we train our AI engine with those changes and can provide unique suggestions to every single line of code analyzed by our platform.”

DeepCode is a compelling prospect for developers. As applications become more complex and efficiency becomes ever more important, a simple solution for unlocking greater performance could be invaluable. It's no surprise that it has already raised 1.1 million in investment from the VC company btov. It's only going to become more popular with investors as the popularity of the platform grows. This might mean the end of spaghetti code, which can only be a good thing.

Find out more about DeepCode and its pricing here.

Read more: Active Learning: An approach to training machine learning models efficiently


10 years of GitHub

Richard Gall
12 Apr 2018
3 min read
GitHub celebrated its tenth birthday on Tuesday. Since its launch in April 2008, the version control platform has come to define the way we build software. It's difficult to see open source software culture evolving to the extent it has without GitHub. Its impact can be felt even beyond software: it has changed the way users experience the web, and it has made artificial intelligence more pervasive than ever. For that reason, we should all pay tribute to GitHub - developers and non-developers alike.

You can find Packt on GitHub here. We have more than 1,400 code repositories for our products that you can use as you learn.

27 million developers have 'contributed' to GitHub's success

GitHub now has more than 27 million developers on its platform and is home to more than 80 million projects. To say thank you to everyone who has been a part of the last 10 years - everyone who has quite literally contributed to its success - the team put together this short video:

https://www.youtube.com/watch?v=hQXV70Z4cFI

Key GitHub milestones

Let's take a look at some of the most important milestones in GitHub's first decade. Find out more here.

April 12, 2008 - GitHub officially launches. Read the post from 2008 to take a trip back in time...

May 21, 2009 - Node.js launches and its saga begins. From io.js forking in 2014 to reunification a year later, the JavaScript runtime is today one of the most popular tools on GitHub. There are almost 2,000 contributors to Node.js Core, the central Node.js project.

January 1, 2012 - JavaScript begins 2012 as the most popular language on GitHub. More than 6 years later, that still remains the case.

January 16, 2013 - GitHub reaches 3 million users. In the space of just 5 years, the platform had become truly embedded in the software landscape.

October 23, 2014 - Microsoft takes the significant step of making .NET open source. This was the start of a cultural shift at Microsoft that's still happening today. Perhaps more than any other moment in the history of open source, this underlined the fact that it was no longer a niche stream of the software landscape. It had become part of the mainstream.

March 2, 2015 - Unreal Engine 4 makes its source code available for free. This gave game developers access to an incredibly powerful tool at no cost whatsoever. The impact on the growth of game development is important to note: 'Games' was one of the most popular topics on the platform in 2017.

December 3, 2015 - Mirroring the move made by Microsoft in 2014, Apple makes Swift open source. Again, this was a huge tech company - with a similar reputation for isolationism - embracing open source.

February 15, 2017 - The launch of TensorFlow 1.0 marks the start of the boom in artificial intelligence. Or, more specifically, it marks the point at which artificial intelligence and machine learning became accessible to millions more people than ever before. The range of projects that TensorFlow has been a part of is astonishing - from research to medicine to marketing, its accessibility has had an impact on the world in ways that many people don't realise.

May 31, 2017 - GitHub's 100 millionth pull request is merged.

Thank you, GitHub, for 10 years supporting software developers. It really wouldn't have been the same without you. Here's to another 10 years...

Is Comet the new Github for Artificial Intelligence?
Mine Popular Trends on GitHub using Python – Part 1
Github’s plan for coding automation, TensorFlow releases Tensorflow Lattice – 11th Oct.’17 Headlines
Collaboration Using the GitHub Workflow
Using Gerrit with GitHub

Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release

Sugandha Lahoti
18 Oct 2019
3 min read
The team behind Swift shared a new blog post yesterday detailing the new diagnostic architecture being implemented as part of the upcoming Swift 5.2 release (expected early November). The new architecture aims to improve the diagnostics produced by the compiler, and it will make it easier to improve existing diagnostics and to port them for use by new feature implementors.

The Swift compiler uses a type checker to ensure the correctness of a program. The type checker enforces rules about how types are used in source code and shows when those rules are violated. However, it guesses the exact location of an error, which is not helpful in certain scenarios because the guess is not specific or actionable. With the new diagnostic infrastructure, the team is implementing a type checker that attempts to “fix” problems right at the point where they occur while remembering the fixes it has applied. This way, the type checker can pinpoint errors in more kinds of programs.

A new constraint fix resolves inconsistent situations

The type checker converts the source code into a constraint system, which represents relationships among the types in the code. The constraint system first generates the constraints and then solves them. Constraint generation produces a constraint system that relates the types of the various subexpressions within an expression. The constraint solver then takes the given set of constraints and determines the most specific type binding for each of the type variables in the constraint system.

To improve constraint solving, the new diagnostic infrastructure employs a constraint fix that tries to resolve inconsistent situations where the solver gets stuck with no other types to attempt. This fix captures all useful information about the error location from the solver and uses it later for diagnostics. While the former approach guesses where the error is located, the new approach has a symbiotic relationship with the solver, which provides all of the error locations to it.

How a constraint fix works

A constraint fix is created whenever a constraint failure is detected. The constraint fix captures three crucial pieces of information about a failure: the kind of failure that occurred, the location in the source code where the failure came from, and the types and declarations involved in the failure. The constraint solver then accumulates these fixes. Once it arrives at a solution, it looks at the fixes that were part of the solution and produces actionable errors or warnings.

The Swift team has shared examples of improved diagnostics, including SwiftUI examples, to better demonstrate the workings of the new diagnostic infrastructure. People are quite excited about these error-message improvements in Swift and shared their views on the associated thread in the Swift Forums.

“Omg better error messages! That's so awesome! Right now error messages are the worst part of swift, I'm excited. I hope it will be the end of types called _ in error messages”

“Awesome work @xedin! The blog post is highly informative and I thoroughly enjoyed reading it. I very much look forward to seeing the new diagnostic architecture in action.”

Developers from the Swift for TensorFlow project propose adding first-class differentiable programming to Swift
Apple releases the native SwiftUI framework with declarative syntax, live editing, and support of Xcode and more
Swift is improving the UI of its generics model with the “reverse generics” system
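To make the problem described above concrete, here is a small, hypothetical snippet (my own example, not one from the Swift blog post) of the kind of expression the type checker has to diagnose: a single bad sub-expression inside a closure. Where the compiler points, and what it says, is exactly what the constraint-fix machinery is meant to improve.

```swift
struct Reading {
    let value: Double
}

let readings = [Reading(value: 1.5), Reading(value: 2.25)]

// This line does not type-check: a String is added to a Double inside the
// closure. The constraint solver has to locate the mismatch at the "0.5"
// string literal rather than reporting a vague error about the whole closure.
let adjusted = readings.map { $0.value + "0.5" }
```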


Rust 1.38 releases with pipelined compilation for better parallelism while building a multi-crate project

Bhagyashree R
27 Sep 2019
3 min read
After releasing Rust 1.37 last month, the Rust team announced the release of Rust 1.38 yesterday. This version supports pipelined rustc compilation, extends the #[deprecated] attribute to macros, adds the std::any::type_name function for getting a type's name, and more.

Key updates in Rust 1.38

Pipelined compilation to increase parallelism

Cargo, Rust’s dependency manager and project compiler, creates a directed acyclic graph (DAG) of crates whenever a cargo build command is fired. Previously, Cargo waited until all of the dependencies for a compilation were completed before it proceeded to execute rustc. Starting with 1.38, this wait is minimized by introducing pipelined compilation: Cargo now takes advantage of the metadata produced by the compiler to start the next compilation earlier.

Read also: Rust 1.29 is out with improvements to its package manager, Cargo

The team shared in the announcement that though this update doesn’t have much effect on builds for a single crate, it has shown up to 10-20% improvement in compilation speed during testing. “Other ones did not improve much, and the speedup depends on the hardware running the build, so your mileage might vary. No code changes are needed to benefit from this,” the team adds.

Linting incorrect uses of mem::{uninitialized, zeroed}

Previously, the ‘mem::uninitialized’ function allowed developers to sidestep Rust’s initialization checks. This operation can be “incredibly dangerous” as it makes the Rust compiler assume that values are properly initialized. This was addressed in Rust 1.36 by stabilizing the ‘MaybeUninit<T>’ type. The Rust team explained in a previous announcement, “The Rust compiler will understand that it should not assume that a MaybeUninit<T> is a properly initialized T. Therefore, you can do gradual initialization more safely and eventually use .assume_init() once you are certain that maybe_t: MaybeUninit<T> contains an initialized T.” The ‘mem::uninitialized’ function is planned to be deprecated in Rust 1.39. Starting with Rust 1.38, the compiler has a few checks to identify incorrect initializations using ‘mem::uninitialized’ or ‘mem::zeroed’. However, these checks do not cover all cases of unsound use of these methods.

The #[deprecated] attribute for macros

Rust 1.9 introduced the #[deprecated] attribute, which allows crate authors to notify their users that an item of their crate is deprecated and will be removed in a future release. In Rust 1.38, this attribute can also be applied to macros.

The std::any::type_name function

Rust 1.38 introduces a new function called ‘std::any::type_name’ that gives you the name of a type as a string. Since this is a standard library function intended for debugging, the exact content and format of the string are not guaranteed. The team explains, “The value returned is only a best-effort description of the type; multiple types may share the same type_name value, and the value may change in future compiler releases.”

Some users have already upgraded to Rust 1.38 and shared their feedback on Twitter.
https://twitter.com/gilescope/status/1177229413320134661

Others are eagerly waiting for the release that will ship the stabilized async-await.
https://twitter.com/shirshak55/status/1177247635285000192

These were some of the updates in Rust 1.38. Check out the official announcement for more.
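Here is a tiny sketch (my own, not taken from the release notes) of the two library-facing additions mentioned above; it should compile on Rust 1.38:

```rust
// Crate authors can now mark macros as deprecated.
#[deprecated(since = "0.2.0", note = "use `greet!` instead")]
macro_rules! hello {
    () => {
        println!("hello");
    };
}

fn main() {
    // std::any::type_name gives a best-effort, debug-oriented name of a type.
    println!("{}", std::any::type_name::<Vec<String>>());

    // Using the deprecated macro triggers a compiler warning, not an error.
    hello!();
}
```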
Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations
Mozilla introduces Neqo, Rust implementation for QUIC, new HTTP protocol
Introducing Nushell: A Rust-based shell
Oracle releases JDK 13 with switch expressions and text blocks preview features, and more!
Darklang available in private beta