
Tech News - Web Development

354 Articles

What to expect in Webpack 5?

Bhagyashree R
07 Feb 2019
3 min read
Yesterday, the team behind Webpack shared all the updates we will see in its upcoming version, Webpack 5. This version improves build performance with persistent caching, introduces a new named chunk id algorithm, and more. For Webpack 5, the minimum supported Node.js version has been updated from 6 to 8. As this version is a major release, it will come with breaking changes and users should expect some plugins to stop working.

Expected features in Webpack 5

Webpack 4 deprecated features removed

All the features that were deprecated in Webpack 4 have been removed in this version. So, when migrating to Webpack 5, ensure that your Webpack 4 build doesn't show any deprecation warnings. Additionally, IgnorePlugin and BannerPlugin must now be passed an options object.

Automatic Node.js polyfills removed

All the versions before Webpack 4 provided polyfills for most of the Node.js core modules. These were automatically applied whenever a module used any of the core modules. Using polyfills makes it easy to use modules written for Node.js, but it also increases the bundle size as huge modules get added to the bundle. To stop this, Webpack 5 removes the automatic polyfilling and focuses on frontend-compatible modules.

Algorithms for deterministic chunk and module IDs

Webpack 5 comes with new algorithms for long-term caching. These are enabled by default in production mode with the following configuration lines: chunkIds: "deterministic", moduleIds: "deterministic" (a configuration sketch follows at the end of this article). These algorithms assign short numeric IDs to modules and chunks in a deterministic way. It is recommended that you use the default values for chunkIds and moduleIds. You can also choose to use the old defaults, chunkIds: "size", moduleIds: "size", which generate smaller bundles but invalidate them more often for caching.

Named chunk IDs algorithm

A named chunk id algorithm is introduced, which is enabled by default in development mode. It gives chunks and filenames human-readable names instead of the old numeric names. The algorithm determines the chunk ID from the chunk's content, so users no longer need to use import(/* webpackChunkName: "name" */ "module") for debugging. To opt out of this feature, you can set chunkIds: "natural" in the configuration.

Compiler idle and close

Starting from Webpack 5, compilers need to be closed after use. Compilers now enter and leave an idle state and have hooks for these states. Once a compiler is closed, all remaining work should be finished as fast as possible. Then, a callback signals that the closing has been completed.

You can read the entire changelog in the Webpack repository.

Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!
How to create a desktop application with Electron [Tutorial]
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0
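As a rough sketch of how the long-term-caching options quoted above would fit into a configuration file (a minimal example based on the option names in the changelog, not on finalized Webpack 5 documentation):

```js
// webpack.config.js — minimal sketch; the values shown are the quoted defaults
module.exports = {
  mode: 'production',
  optimization: {
    chunkIds: 'deterministic',  // the new production default in Webpack 5
    moduleIds: 'deterministic', // swap both to 'size' for smaller bundles
                                // that invalidate more often
  },
};
```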


The OpenJS Foundation accepts NVM as its first new incubating project since the Node.js Foundation and JSF merger

Bhagyashree R
04 Oct 2019
2 min read
Yesterday, the OpenJS Foundation announced that Node Version Manager (NVM) is joining the organization as an incubating project. It is the first new project to enter the OpenJS Foundation's incubation process since the Node.js Foundation and JSF merger. The merger happened in March this year to accelerate the development of JavaScript, combine governance structures, and more. "nvm is joining the OpenJS Foundation as an incubating project, and upon successful completion of onboarding, it will become an 'At-Large' project. An 'At-Large' project is a 'stable project with minimal needs'," the announcement reads.

Node Version Manager (NVM) and its functions

NVM is a tool that allows programmers to seamlessly switch between different versions of Node.js. It comes in handy when you are working on different Node.js projects or want to check your library for maximum backward compatibility. It is a POSIX-compliant bash script and supports multiple types of shells, including sh, zsh, dash, and ksh, but not fish. NVM also makes installing Node.js very easy by handling the compilation for systems that don't have prebuilt binaries available. You can install multiple versions of Node.js on a single system, each with its own node_modules directory for global package installs. Since NVM stores globally installed modules inside the user directory, it removes the need for sudo when used with npm (a typical workflow is sketched at the end of this article).

NVM is an important part of the Node.js and JavaScript ecosystem. Joining the OpenJS Foundation will help in its further development, stability, and governance. "By joining the OpenJS Foundation, there are multiple organizational and infrastructure areas that will be better supported, helping both current users and future users including ensuring no single point of failure for the nvm.sh domain, GitHub repo, and more," the OpenJS Foundation wrote in the announcement.

Check out the official announcement by the OpenJS Foundation to know more in detail.

Node.js and JS Foundation announce intent to merge; developers have mixed feelings
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Electron 5.0 ships with new versions of Chromium, V8, and Node.js
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
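A typical NVM workflow looks roughly like this (the version numbers are illustrative):

```sh
nvm install 12.11.1    # install a Node.js version, compiling it if no prebuilt binary exists
nvm install 8.16.2     # keep an older version alongside it
nvm use 8.16.2         # switch the current shell to the older version
npm install -g eslint  # global installs land in this version's own node_modules,
                       # inside the user directory, so no sudo is needed
nvm ls                 # list every installed version
```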


Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Bhagyashree R
05 Sep 2019
6 min read
On Tuesday, Mozilla announced the release of Firefox 69. This release comes with default blocking of third-party tracking cookies and cryptomining for all users. The team has also worked on a patch to minimize power consumption by Firefox Nightly for macOS users, which will possibly land in Firefox 70. In another announcement, Mozilla shared its plans for implementing Chrome's Manifest V3 changes.

Key updates in Firefox 69

Enhanced Tracking Protection on by default for all

Browser cookies are used to store your login state and website preferences, provide personalized content, and more. However, they also facilitate third-party tracking. In addition to being a threat to user privacy, they can also end up slowing down your browser, consuming your data, and creating user profiles. The tracked information and profiles can also be sold and used for purposes that you did not consent to. With the aim of preventing this, the Firefox team came up with the Enhanced Tracking Protection feature. In June this year, they made it available to new users by default. With Firefox 69, it is now on by default and set to the 'Standard' setting for all users. It blocks all known third-party tracking cookies that are listed by Disconnect.

Protection against cryptomining and browser fingerprinting

There are many other ways through which users are tracked or their resources are used without their consent. Unauthorized cryptominers run scripts to generate cryptocurrency, which requires a lot of computing power and can end up slowing down your computer and draining your battery. There are also fingerprinting scripts that store a snapshot of your computer's configuration when you visit a website, which can be used to track your activities across the web. To address these, the team introduced an option to block cryptominers and browser fingerprinting in Firefox Nightly 68 and Beta 67. Firefox 69 includes the option to block cryptominers in the "Standard Mode", which means it is on by default. To block fingerprinting, users need to turn on the "Strict Mode". We can expect the team to enable it by default in a future release.

Read also: Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta

A stricter Block Autoplay feature

Starting with Firefox 69, Block Autoplay will block all media with sound from playing automatically by default. This means that users will be able to block any video from autoplaying, not just those that autoplay with sound.

Updates for Windows 10 users

Firefox 69 brings support for the Web Authentication HMAC Secret extension via Windows Hello for Windows 10 users. The HMAC Secret extension will allow users to sign in even when the device is offline or in airplane mode. This release also comes with Windows hints to appropriately set content process priority levels, and a shortcut on the Windows 10 taskbar to help users easily find and launch Firefox.

Improved macOS battery life

Firefox 69 comes with improved battery life and download UI. To minimize battery consumption, Firefox will switch back to the low-power GPU on macOS systems that have a dual graphics card. Other updates include JIT support for ARM64, and Finder now shows download progress for files being downloaded. Beyond the main releases, the team is also putting effort into making Firefox Nightly more power-efficient. On Monday, Henrik Skupin, a senior test engineer at Mozilla, shared that there is about a 3X decrease in power usage by Firefox Nightly on macOS. We can expect this change to possibly land in version 70, which is scheduled for October 22.

https://twitter.com/whimboo/status/1168437524357898240

Updates for developers

Debugger updates: With this release, debugging an application that has event handlers is easier. The debugger now includes the ability to automatically break when the code hits an event handler. Also, developers can now save the scripts shown in the debugger's source list pane via the Download file context menu option.

The Resize Observer API: Firefox 69 supports the Resize Observer API by default. This API provides a way to monitor changes to an element's size, notifying the observer each time the size changes (a short usage sketch follows at the end of this article).

Network panel updates: The network panel will now show the resources that got blocked because of CSP or Mixed Content. This will "allow developers to best understand the impact of content blocking and ad blocking extensions given our ongoing expansion of Enhanced Tracking Protection to all users with this release," the team writes.

Re-designed about:debugging: In Firefox 69, the team has migrated remote debugging from the old WebIDE into a re-designed about:debugging.

Check out the official release notes to know what else has landed in Firefox 69.

Mozilla on Google's Manifest V3

Chrome is proposing various changes to its extension platform, called Manifest V3. In a blog post shared on Tuesday, Mozilla talked about its plans for implementing these changes and how they will affect extension developers. One of the significant updates proposed in Manifest V3 is the deprecation of the blocking webRequest API, which allows extensions to intercept all inbound and outbound traffic from the browser and then block, redirect, or modify the intercepted traffic. In place of this API, Chrome is planning to introduce the declarativeNetRequest API, which limits the blocking version of the webRequest API. According to Manifest V3, the declarativeNetRequest API will be treated as the primary content-blocking API in extensions.

Read also: Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument

Explaining the impact of this proposed change if implemented, Mozilla wrote, "This API impacts the capabilities of content blocking extensions by limiting the number of rules, as well as available filters and actions. These limitations negatively impact content blockers because modern content blockers are very sophisticated and employ layers of algorithms to not only detect and block ads, but to hide from the ad networks themselves."

Mozilla further shared that it does not have any immediate plans to remove the blocking webRequest API. "We have no immediate plans to remove blocking webRequest and are working with add-on developers to gain a better understanding of how they use the APIs in question to help determine how to best support them," Mozilla wrote in the announcement. However, Mozilla is willing to consider other changes proposed in Manifest V3. It is planning to implement the proposal that requires content scripts to have the same permissions as the pages where they get injected.

Read the official announcement to know more in detail about Mozilla's plans regarding Manifest V3.

Other news in web

JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3
#Reactgate forces React leaders to confront community's toxic culture head on
Google Chrome 76 now supports native lazy-loading
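As a quick sketch of the Resize Observer API mentioned in the developer updates above (the element selector is illustrative):

```js
// Log an element's new size each time it changes; '#sidebar' is a placeholder id
const observer = new ResizeObserver(entries => {
  for (const entry of entries) {
    const { width, height } = entry.contentRect;
    console.log(`${entry.target.id} resized to ${width}x${height}`);
  }
});
observer.observe(document.querySelector('#sidebar'));
```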


Google Chrome 'secret' experiment crashes browsers of thousands of IT admins worldwide

Sugandha Lahoti
18 Nov 2019
4 min read
On Thursday last week, thousands of IT admins were left aghast when their Google Chrome browsers went blank with the White Screen of Death, effectively crashing the browser. This happened because Google was silently experimenting with a new WebContents Occlusion feature.

The WebContents Occlusion feature is designed to suspend Chrome tabs when you move other apps on top of them, reducing resource usage when the browser isn't in use. This feature is expected to reduce battery usage (for Chrome and other apps running on the same machine). The feature had been under testing in Chrome Canary and Chrome Beta releases. However, last week Google decided to test it in the main stable release so it could get more feedback on how it behaved. "The experiment/flag has been on in beta for ~5 months," said David Bienvenu, a Google Chrome engineer, in a Chromium bug thread. "It was turned on for stable (e.g., M77, M78) via an experiment that was pushed to released Chrome Tuesday morning."

The main issue was that this experiment was released silently to the stable channel, without IT admins or users being warned about Google's changes. Naturally, Chrome users were left confused and vented their anger and complaints on Google Chrome's support forum. Affected business users included those that run Chrome in Windows Server "terminal server" environments and on Citrix servers. Due to the browser crashing, employees working in tightly controlled enterprise environments were unable to switch browsers, impacting business-critical functionality.

After multiple complaints from businesses and users, Google rolled back the change late on Thursday night. "I'll rollback the launch of this experiment and try to figure out how to deal with Citrix," noted Bienvenu in the bug thread. Later, a new Chrome configuration file was pushed out to users. "I believe it's more of a pull than a push thing," Bienvenu said, "so once the update is live on the Google servers, the next time you launch Chrome, you should get the new config."

Google's Chrome experiment left IT admins confused

Many IT admins were also angry that they had wasted valuable resources and time trying to fix issues in their environment, thinking it was their own fault. "We spent the better part of yesterday trying to determine if an internal change had occurred in our environment without our knowledge. We did not realize this type of event could occur on Chrome unbeknownst to us. We are already discussing alternative options, none of them are great, but this is untenable," wrote one angry user.

Others urged Google to let them opt out of any Google Chrome experiments. "Would like to be excluded from further experimental changes. We have had the sporadic white screen of deaths over the past few weeks. How would we have ever known it was a part of the 1%? We chalked it off as bad Chrome profiles. We still have fresh memories of the experimental Chrome sound issue. That was very disruptive too. Please test your changes in your internal rdsh/Citrix environment. Please give us the option to opt out of 'experimental' changes. Thank you for your consideration."

Another said, "We've been having random issues for quite some time, and our agents could be in this 1%. This last one was a huge impact on our customer-facing agents, not to mention working all day yesterday and today of troubleshooting. Is there a way to be excluded from these experimental changes? If Chrome is going to be an enterprise browser, we need stability."

With Google Chrome's mishap, more people are advocating moving to different browsers that give more control to their end users. Chrome also came under fire recently when it started experimenting with the Manifest V3 extension platform in the Chrome 80 Canary build. Chrome's ad-blocking changes received overwhelmingly negative feedback as they can stop many popular ad-blockers from working. Other browsers that offer better user privacy and ad-blocking features keep popping up, Brave 1.0 being the latest in the line.

Brave 1.0 releases with focus on user privacy, crypto currency-centric private ads and payment platform
Google starts experimenting with Manifest V3 extension in Chrome 80 Canary build
Expanding Web Assembly beyond the browser with Bytecode Alliance, a Mozilla, Fastly, Intel and Red Hat partnership


Introducing ReX.js v1.0.0, a companion library for RegEx written in TypeScript

Prasad Ramesh
20 Nov 2018
2 min read
ReX.js is a helper library written in TypeScript for writing regular expressions. Yesterday, ReX.js v1.0.0, the first major version, was released. Being written in TypeScript, it provides great autocompletion and development experience across various modern code editors. One of the main advantages of using ReX.js is its ability to document every line of code without hassle.

Anatomy of ReX.js v1.0.0

ReX.js is structured as a namespace consisting of the following modules:

Matcher: The class used to construct and use matching expressions.
Replacer: The class used to construct and use replacement expressions.
Operation: This class represents a basic operation that is applied to expression constructors.
Parser: The class used to parse and execute regexps. It is used by Matcher and implements polyfills for named groups and, partially, for lookbehinds.
ReXer: Used to construct regexps. The Matcher and Replacer classes inherit from ReXer.

The GitHub page says that the Matcher and Replacer classes are the ones most likely to be used by developers. The other classes would more likely be used for extensibility and advanced use cases.

Advanced use of ReX.js v1.0.0

Beyond basic regex operations, ReX.js also provides options for extending its functionality.

Operations and channels: Every method used in ReX.js just adds a new Operation to ReXer. An Operation can then be stringified using its own stringify method. A concept of channels is introduced to construct linear regexps from nested function expressions. A channel is simply an array of Operations, and the channels themselves are stored as an array in ReXer.

Snippets: Snippets are available if you want to reuse any kind of Operation configuration. Snippets provide an option to assign a given config to a name for later reuse.

Methods and extensions: Methods are ways to reuse and apply custom operations, while extensions are just arrays of methods.

Installing ReX.js v1.0.0

ReX.js is available as a package on npm. You can include it in your current project by using:

```sh
npm install @areknawo/rex
```

If you're using Yarn, use the following command:

```sh
yarn add @areknawo/rex
```

For more details and documentation, visit the ReX.js GitHub page.

Manipulating text data using Python Regular Expressions (regex)
Introducing Howler.js, a Javascript audio library with full cross-browser support
low.js, a Node.js port for embedded systems


Next.js 7, a framework for server-rendered React applications, releases with support for React context API and WebAssembly

Savia Lobo
20 Sep 2018
4 min read
Yesterday, the Next.js team announced that the latest version, v7, of its React framework is now production-ready. Next.js 7 has had 26 canary releases and 3.4 million downloads so far. Along with the version 7 release, they have also launched a completely redesigned nextjs.org. This version is power-packed with faster boot and re-compilation improvements, better error reporting, static CDN support, and much more.

Key highlights of Next.js 7

DX improvements

Next.js 7 includes many significant improvements to the build and debug pipelines. With the inclusion of webpack 4 and Babel 7, plus improvements and optimizations to the codebase, Next.js can now boot up to 57% faster during development. Also, due to the new incremental compilation cache, any changes you make to the code will build 40% faster. While developing and building, users will now see better real-time feedback with the help of webpackbar.

Better error reporting with react-error-overlay

Until now, Next.js would render the error message and its stack trace. From this version, react-error-overlay is used to enrich the stack trace with:

Accurate error locations for both server and client errors
Highlights of the source to provide context
A full rich stack trace

react-error-overlay also makes it easy to open your text editor by just clicking on a specific code block.

Upgraded compilation pipeline: webpack 4 and Babel 7

Webpack 4: This version of Next.js is now powered by the latest webpack 4, with numerous improvements and bugfixes, including support for .mjs source files, code-splitting improvements, and better tree-shaking (removal of unused code) support. Another new feature is WebAssembly support; Next.js can even server-render WebAssembly. With webpack 4, a new way of extracting CSS from bundles, called mini-extract-css-plugin, is introduced. @zeit/next-css, @zeit/next-less, @zeit/next-sass, and @zeit/next-stylus are now powered by mini-extract-css-plugin.

Babel 7: Next.js 7 now uses the stable version of Babel (Babel 7). For a full list of changes in Babel 7, head over to its release notes. Some of the main features of Babel 7 are TypeScript support (for Next.js you can use @zeit/next-typescript), fragment syntax (<>) support, babel.config.js support, and an overrides property to apply presets/plugins only to a subset of files or directories.

Standardized dynamic imports

Starting with Next.js 7, the default import() behavior is no longer overridden, which means users get full import() support out of the box. This change is fully backwards-compatible as well. Making use of a dynamic component remains as simple as:

```js
import dynamic from 'next/dynamic'

const MyComponent = dynamic(import('../components/my-component'))

export default () => {
  return <div>
    <MyComponent />
  </div>
}
```

Static CDN support

With Next.js 7, the directory structure of .next is changed to match the URL structure:

```
https://cdn.example.com/_next/static/<buildid>/pages/index.js
// mapped to:
.next/static/<buildid>/pages/index.js
```

While the Next.js team recommends using the proxying type of CDN, this new structure allows users of a different type of CDN to upload the .next directory to their CDN.

Smaller initial HTML payload

As Next.js pre-renders HTML, it wraps pages into a default structure with <html>, <head>, <body>, and the JavaScript files needed to render the page. This initial payload was previously around 1.62 kB. With Next.js 7, the initial HTML payload has been optimized down to 1.5 kB, a 7.4% reduction, making your pages leaner.

React context with SSR between App and Pages

Starting with Next.js 7, there is support for the new React context API between pages/_app.js and page components. Previously it was not possible to use React context between pages on the server side. The reason was that webpack kept an internal module cache instead of using require.cache. The Next.js developers have written a custom webpack plugin that changes this behavior to share module instances between pages. In doing so, users can not only use the new React context but also reduce Next.js's memory footprint when sharing code between pages (a sketch follows at the end of this article).

To know more about these and other features in detail, visit the Next.js 7 blog.

low.js, a Node.js port for embedded systems
Browser-based Visualization made easy with the new P5.js
Deno, an attempt to fix Node.js flaws, is rewritten in Rust
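As a minimal sketch of sharing context from pages/_app.js under Next.js 7 (the ThemeContext name and value are illustrative; Container comes from next/app, as was idiomatic in that era):

```js
// pages/_app.js — minimal sketch; ThemeContext is an illustrative name
import App, { Container } from 'next/app'
import React from 'react'

export const ThemeContext = React.createContext('light')

export default class MyApp extends App {
  render() {
    const { Component, pageProps } = this.props
    return (
      <Container>
        {/* Any page component can now read ThemeContext, on the server too */}
        <ThemeContext.Provider value="dark">
          <Component {...pageProps} />
        </ThemeContext.Provider>
      </Container>
    )
  }
}
```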

Bokeh 1.0 released with a new scatter, patches with holes, and testing improvements

Sugandha Lahoti
26 Oct 2018
3 min read
Bokeh has released its first stable version. Bokeh is an interactive visualization library that targets modern web browsers for presentation. Bokeh 1.0 marks the progress of making Bokeh a truly independent project in the context of a wider OSS community. Bokeh 1.0 comes with new features, fixes, and improvements, including patches with holes, a new scatter, and JSON export and embed.

Patches with holes

Patches with holes are often useful for working with GIS data or maps, and support all the usual and expected hover and hit-testing interactions. They are also helpful for filling contour plots. This work adds a new glyph type, MultiPolygons, inspired by the GeoJSON format of sub-polygons: the GeoJSON specifies an "exterior ring" followed by optional "holes" inside the exterior ring.

A new scatter

The scatter marker type is now parameterizable in the Bokeh 1.0 release. The scatter glyph method creates a new Scatter object that can specify the marker type of each data point individually. A parameterized scatter is useful for keeping all the data inside a single ColumnDataSource. This capability is especially useful together with a new factor_marker transform that can map categorical values to marker types.

A new function in bokeh.embed

Bokeh 1.0 adds a new function to bokeh.embed. This function can be called on any Bokeh object, e.g. plots or layouts, and the output of the call is a block of JSON that represents a Bokeh Document for the object. This JSON output can be used in any HTML document by calling a single function from JavaScript:

```js
Bokeh.embed.embed_item(item, "myplot")
```

The first parameter is the JSON output and the second parameter is the id of the div to embed the content into (a fuller sketch follows at the end of this article).

Testing improvements

Bokeh unit tests can now run continuously on Windows. The Selenium integration testing machinery has also been rebuilt and expanded: almost 200 Selenium tests now run continuously to explicitly exercise various Bokeh features and behaviors.

These are just a select few updates. For full details, see the CHANGELOG and Release Notes. If you are using Anaconda, Bokeh can easily be installed by executing conda install -c bokeh bokeh. Otherwise, use pip install bokeh.

How to create a web designer resume that lands you a Job
Is your web design responsive?
"Be objective, fight for the user, and test with real users on the go!" – Interview with design purist, Will Grant
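For instance, a page could fetch the exported JSON and embed it like this (a minimal sketch assuming a hypothetical /plot endpoint that serves the exported JSON, with BokehJS already loaded on the page):

```js
// Fetch the exported JSON document and render it into <div id="myplot"></div>
fetch('/plot')
  .then(response => response.json())
  .then(item => Bokeh.embed.embed_item(item, 'myplot'));
```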


Severity issues raised for Python 2 Debian packages for not supporting Python 3

Fatema Patrawala
16 Oct 2019
5 min read
On Monday, Neil Williams, a software developer from Linux CodeHelp, raised severity issues for Python 2 leaf packages in Debian which do not support Python 3. Neil has urged Debian maintainers to remove Python 2 from all Debian packages.

He specifically mentions one of the packages, Calibre, an e-book management software which is completely open source and licensed under the GNU GPL v3. Calibre is written primarily in Python with some C/C++ code for speed and system interfacing, but it is not yet compatible with Python 3, as it requires at least Python 2.7.9.

In 2017, an issue was raised on the Calibre platform by a user: "Python 2 is retiring in thirty months. Calibre needs to convert to Python 3." Kovid Goyal, author of the Calibre platform, responded saying, "No, it doesn't. I am perfectly capable of maintaining python 2 myself. Far less work than migrating the entire calibre codebase." Now the latest Calibre version requires Python modules which are no longer available for Python 2.

Gregor Riepl, a systems engineer, says in response to Neil, "As of now, calibre is not of sufficient quality to be part of a Debian release and until it drops all Python2 requirements, it must be considered RC buggy." This means that Calibre >= 4.0 will, for the foreseeable future, not be available in Debian. Calibre version 3.48 will be the last version that can run on Debian until upstream Calibre switches to Python 3.

Riepl further asked Neil if his quality argument is due to the Calibre authors' resistance to migrating to Python 3. Neil responded:

"No, it is based just on the removal of Python2 from Debian and avoiding special cases. Right now, any and every package in Debian testing which requires Python2 and has no Python3 alternative in Debian or ready for upload is of poor quality for no other reason than that. All such packages are of such poor quality that the package should be removed from testing - in an orderly manner, leaf packages first. That is in the best interests of all users, despite what may or may not happen to any particular subset(s) of users. The decision flow is easy - if the answer in each case is 'no', then move on to the next and if you get to the bottom, the bug should be RC.

* Has the package already been removed from testing?
* Is a Python3-only version already in Debian?
* Is a Python3-only version available upstream?
* Does the package have any reverse dependencies?
* If you get here, it is already too late, there have already been enough warnings. Upgrade the bug to RC and get the package auto-removed from testing."

Neil said he was aware of the history of Calibre and understood what would happen if it were no longer a part of Debian. But that did not matter, as the removal of Python 2 is more important for the next Debian release. He also believes that Calibre has a relatively large user base that doesn't know much or care about the Python 2 deprecation. Users will simply perceive dropping Calibre as a bad move on Debian's side and rush towards other packages of significantly lower quality.

He further concluded, "Calibre is nothing special - it's a Python2 leaf package like vland and tftpy and any one of far too many others. Calibre can stay in unstable - it will go FTBFS, of course, but that isn't a problem either, IMHO. It's calibre's problem - not Debian's problem. There's always the option of users installing the old Python2 stuff from Buster to keep calibre hobbling along. Debian is the higher priority here. Calibre would be nice to have but it does not deserve to cause delays on anybody else's voluntary effort. No package has that right."

Community feels dropping Python 2 will leave unmaintained runtimes and libraries in packages

On Hacker News, users are discussing how the Python foundation's push for packages to migrate to Python 3 will leave Python 2 as an entirely unmaintained runtime, with unmaintained libraries, in the package repository.

One user comments, "Historically, Debian hasn't particularly objected to packaging obsolete versions of programming languages without upstream support. I doubt anyone is checking for potential security problems in Algol 68 and Fortran 77 implementations that Debian ships, and I don't think the people using those packages are particularly inconvenienced by that. It seems a shame that the social pressure to persuade people to port their code to Python 3 means that Debian is going to have weaker support for 10-year-old Python than 40-year-old Fortran. In particular, there are ongoing efforts to try to make it the normal thing for scientists to make the programs they ran on their data available so that their results can be reproduced; aggressively dropping older programming language implementations rather gets in the way of that."

Another user responded, "This isn't about 'languages'. It's about software! Algol 68 and Fortran 77 may have stale (but maintained) compilers or interpreters in the package repository. Starting very soon - Python 2 will have an entire set of unmaintained runtime and libraries in the package repository. You know - actual, officially, unmaintained software! Unmaintained software that other packages, including Calibre in this example, further build on. Of course they're throwing this out."

Python 3.8 is now available with walrus operator, positional-only parameters support for Vectorcall, and more
Core Python team confirms sunsetting Python 2 on January 1, 2020
PyPy will continue to support Python 2.7, even as major Python projects migrate to Python 3


Chrome 78 beta brings the CSS Properties and Values API, the native file system API, and more!

Bhagyashree R
23 Sep 2019
3 min read
Last week, Google announced the release of Chrome 78 beta. Its stable version is scheduled for release in October this year. Chrome 78 will ship with a couple of new APIs, including the CSS Properties and Values API and the Native File System API.

Key updates in Chrome 78 beta

The CSS Properties and Values API

Houdini's CSS Properties and Values API will be supported in Chrome 78. The Houdini task force consists of engineers from Mozilla, Apple, Opera, Microsoft, HP, Intel, and Google. In CSS, developers can define user-controlled properties using CSS custom properties, also known as CSS variables. However, CSS custom properties have a few limitations that make them difficult to work with. The CSS Properties and Values API addresses these limitations by allowing the registration of properties that have a value type, an initial value, and a defined inheritance behavior (a registration sketch follows at the end of this article).

The Native File System API

Chrome 78 will support the Native File System API, which will enable web applications, like IDEs, photo and video editors, text editors, and more, to interact with files on the user's local device. After permission to access local files is granted, the API will allow web applications to read or save changes directly to files and folders on the user's device.

The SMS Receiver API

Websites send a randomly generated one-time password (OTP) to verify a phone number. This way of verification is cumbersome as it requires a user to manually enter or copy and paste the password into a form. Starting with Chrome 78, users will be able to skip this manual interaction completely with the help of the SMS Receiver API. It gives websites the ability to programmatically obtain OTPs from SMS as a solution "to ease the friction and failure points of manual user input of SMS codes, which is prone to error and phishing."

Origin trials

Chrome 78 introduces origin trials that allow developers to try new features and share their feedback on "usability, practicality, and effectiveness to the web standards community." Developers can register to enable an origin trial feature for all users on their origin for a fixed period of time. To know what features are available as an origin trial, check out the Origin Trials dashboard.

Among the deprecations are disallowing synchronous XHR during page dismissal and the removal of the XSS Auditor.

In a discussion on Hacker News, users were skeptical about the new Native File System API. A user commented, "I'm not sure about how to think about the file system API. On one hand, is great to see that secure file system access is possible in-browser, which allows most electron apps to be converted into PWAs. That's great, I no longer need to run 5 different chromium instances. On the other hand, I'm really not sure if I like the future of editing Microsoft Office documents in the browser. I heavily believe that apps should have an integrated UX (with appropriate OS-specific widgets) because it allows coherency and familiarity."

To know what else is coming in Chrome 78, check out the official announcement by Google.

Other news in Web Development

Safari Technology Preview 91 gets beta support for the WebGPU JavaScript API and WSL
New memory usage optimizations implemented in V8 Lite can also benefit V8
GitHub updates to Rails 6.0 with an incremental approach
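As a small sketch of registering a typed custom property with the CSS Properties and Values API (the property name is illustrative):

```js
// Register a typed custom property; '--theme-color' is a placeholder name
CSS.registerProperty({
  name: '--theme-color',
  syntax: '<color>',
  inherits: false,
  initialValue: 'rebeccapurple',
});
// Once registered, the property can be transitioned like a built-in one:
// .box { background: var(--theme-color); transition: --theme-color 0.3s; }
```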


Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support, and more!

Bhagyashree R
20 Sep 2019
3 min read
Yesterday, Apple released Safari 13 for iOS 13, macOS 10.15 (Catalina), macOS Mojave, and macOS High Sierra. This release comes with opt-in dark mode support, FIDO2-compliant USB security keys support, updated Intelligent Tracking Prevention, and much more.

Key updates in Safari 13

Desktop-class browsing for iPad users

Starting with Safari 13, iPad users will have the same browsing experience as macOS users. In addition to displaying websites the same as desktop Safari, it will also provide the same capabilities, including more keyboard shortcuts, a download manager with background downloads, and support for top productivity websites.

Updates related to authentication and passwords

Safari 13 will prompt users to strengthen their passwords when they sign into a website. On macOS, users will be able to use FIDO2-compliant USB security keys in Safari. Also, support is added for "Sign in with Apple" in Safari and WKWebView.

Read also: W3C and FIDO Alliance declare WebAuthn as the web standard for password-free logins

Security and privacy updates

A new permission API is added for DeviceMotionEvent and DeviceOrientationEvent on iOS (a sketch follows at the end of this article). The DeviceMotionEvent class encapsulates details like the measurements of the interval, rotation rate, and acceleration of a device, whereas the DeviceOrientationEvent class encapsulates the angles of rotation (alpha, beta, and gamma) in degrees and the heading. Other updates include changes to third-party iframes to prevent them from automatically navigating the page, and Intelligent Tracking Prevention is updated to prevent cross-site tracking through referrers and link decoration.

Performance updates

While using Safari 13, iOS users will find that the initial rendering time for web pages is reduced. The memory consumed by JavaScript, including for non-web clients, is also reduced.

Web API updates

Safari 13 comes with a new Pointer Events API to enable consistent access to mouse, trackpad, touch, and Apple Pencil events. It also supports the Visual Viewport API, which adjusts web content to avoid overlays such as the onscreen keyboard.

Deprecated features in Safari 13

WebSQL and Legacy Safari Extensions are no longer supported. To replace your previously provided Legacy Safari Extensions, Apple provides two options. First, you can configure your Safari App Extension to provide an upgrade path that will automatically remove the previous Legacy Safari Extension when it is installed. Second, you can manually convert your Legacy Safari Extension to a Safari App Extension.

In a discussion on Hacker News, users were pleased with the support for the Pointer Events API. A user commented, "The Pointer Events spec is a real joy. For example, if you want to roll your own 'drag' event for a given element, the API allows you to do this without reference to document or a parent container element. You can just declare that the element currently receiving pointer events capture subsequent pointer events until you release it. Additionally, the API naturally lends itself to patterns that can easily be extended for multi-touch situations."

Others expressed their concern regarding the deprecation of Legacy Safari Extensions. A user added, "It really, really is a shame that they removed proper extensions. While Safari never had a good extension story, it was at least bearable, and in all other regards its simply the best Mac browser. Now I have to take a really hard look at switching back to Firefox, and that would be a downgrade in almost every regard I care about. Pity."

Check out the official release notes of Safari 13 to know more in detail.

Other news in web development

New memory usage optimizations implemented in V8 Lite can also benefit V8
5 pitfalls of React Hooks you should avoid – Kent C. Dodds
Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
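As a sketch of the new motion-sensor permission flow mentioned above (the button element is illustrative; the call must come from a user gesture):

```js
// Ask for motion access on tap, then start listening if granted
const button = document.querySelector('#enable-motion'); // placeholder element
button.addEventListener('click', () => {
  if (typeof DeviceMotionEvent.requestPermission === 'function') {
    DeviceMotionEvent.requestPermission().then(state => {
      if (state === 'granted') {
        window.addEventListener('devicemotion', event => {
          console.log(event.rotationRate, event.acceleration);
        });
      }
    });
  }
});
```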

Next.js 8 releases with a serverless mode, better build-time memory usage, and more

Bhagyashree R
12 Feb 2019
3 min read
After releasing Next.js 7 in September last year, the team behind Next.js released the production-ready Next.js 8 yesterday. This release comes with a serverless mode, a build-time memory usage reduction, prefetch performance improvements, security improvements, and more. As with previous releases, all the updates are backward compatible.

The following are some of the updates Next.js 8 comes with:

Serverless mode

Serverless deployment comes with various benefits, including more reliability, scalability, and separation of concerns by splitting an application into smaller parts, also called lambdas. To bring these benefits to Next.js users, this version comes with a serverless mode in which each page in the 'pages' directory is treated as a lambda. It also comes with low-level APIs for implementing serverless deployment (a configuration sketch follows at the end of this article).

Better build-time memory usage

The Next.js team, together with the webpack team, has worked on improving the build performance and resource utilization of Next.js and webpack. This collaboration has resulted in up to 16 times better memory usage with no degradation in performance. This improvement ensures that memory gets released much more quickly and no processes crash under stress.

Prefetch performance improvements

Next.js supports prefetching pages for faster navigation. Earlier, users were required to inject a 'script' tag into the document 'body', which caused an overhead while opening pages. In Next.js 8, the 'prefetch' attribute uses <link rel="preload"> instead of a 'script' tag, and prefetching now starts after onload to allow the browser to manage resources. In addition to removing the overhead, this version also disables prefetch on slower network connections by detecting 2G internet and navigator.connection.saveData mode.

Security improvements

In this version, a new 'crossOrigin' config option is introduced to ensure that all 'script' tags have the 'cross-origin' attribute set. With this new config option, you no longer need 'pages/_document.js' to set up cross-origin in your application. Another security improvement is the removal of inline JavaScript. In previous versions, users were required to include script-src 'unsafe-inline' in their policy to enable Content Security Policy, because Next.js was creating an inline 'script' tag to pass data. In this version, the inline script tag is changed to a JSON tag for safe transfer to the client, which means Next.js no longer includes any inline scripts.

To read about other updates introduced in Next.js 8, check out its official announcement.

Next.js 7, a framework for server-rendered React applications, releases with support for React context API and Webassembly
16 JavaScript frameworks developers should learn in 2019
Nuxt.js 2.0 released with a new scaffolding tool, Webpack 4 upgrade, and more!
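A minimal next.config.js sketch of the two options discussed above (option names as described in the release announcement; the crossOrigin value shown is illustrative):

```js
// next.config.js — minimal sketch of the serverless and crossOrigin options
module.exports = {
  target: 'serverless',     // build each page in pages/ as a standalone lambda
  crossOrigin: 'anonymous', // set the cross-origin attribute on generated <script> tags
};
```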


React DevTools 4.0 releases with support for Hooks, experimental Suspense API, and more!

Bhagyashree R
16 Aug 2019
3 min read
Yesterday, the React team announced the release of React DevTools 4.0 for Chrome, Firefox, and Edge. In addition to better performance and navigation experience, this release fully supports React Hooks and provides a way to test the experimental Suspense API.

Key updates in React DevTools 4.0

Better performance by reducing the "bridge traffic"

The React DevTools extension is made up of two parts: frontend and backend. The frontend portion includes the components tree, the Profiler, and all the other things that are visible to you. The backend portion, on the other hand, is invisible; it is in charge of notifying the frontend by sending messages through a "bridge". In previous versions of React DevTools, the traffic caused by this notification process was one of the biggest performance bottlenecks. Starting with React DevTools 4.0, the team has reduced this bridge traffic by minimizing the number of messages sent by the backend to render the Components tree. The frontend can request more information whenever required.

Automatically logs React component stack warnings

React DevTools 4.0 now provides an option to automatically append component stack information to the console in the development phase. This enables developers to identify where exactly in the component tree a failure has happened. To disable this feature, navigate to the General settings panel and uncheck "Append component stacks to warnings and errors".

Components tree updates

Improved Hooks support: Hooks allow you to use state and other React features without writing a class (a small example follows at the end of this article). In React DevTools 4.0, Hooks have the same level of support as props and state.

Component filters: Navigating through large component trees can often be tiresome. Now, you can easily and quickly find the component you are looking for by applying component filters.

"Rendered by" list and an owners tree: React DevTools 4.0 has a new "rendered by" list in the right-hand pane that helps you quickly step through the list of owners. There is also an owners tree, the inverse of the "rendered by" list, which lists all the things that have been rendered by a particular component.

Suspense toggle: The experimental Suspense API allows you to "suspend" the rendering of a component until a condition is met. In <Suspense> components you can specify the loading states to show while components below them are waiting to be rendered. This DevTools release comes with a toggle to let you test these loading states.

Profiler changes

Import and export profiler data: Profiler data can now be exported and shared among developers for better collaboration.

Reload and profile: The React profiler collects performance information each time the application is rendered, helping you identify and rectify possible performance bottlenecks in your applications. Previously, DevTools only allowed profiling a "profiling-capable version of React", so there was no way to profile the initial mount of an application. This is now supported with a "reload and profile" action.

Component renders list: The profiler in React DevTools 4.0 displays a list of each time a selected component was rendered during a profiling session. You can use this list to quickly jump between commits when analyzing a component's performance.

You can check out the release notes of React DevTools 4.0 to know what other features have landed in this release.

React 16.9 releases with an asynchronous testing utility, programmatic Profiler, and more
React Native 0.60 releases with accessibility improvements, AndroidX support, and more
React Native VS Xamarin: Which is the better cross-platform mobile development framework?
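As a small generic example of a Hooks component of the kind DevTools 4.0 can now inspect (the component itself is illustrative, not taken from the DevTools docs):

```js
import React, { useState } from 'react';

// In DevTools 4.0, the useState value below is inspectable just like class state
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```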


Microsoft officially releases Microsoft Edge canary builds for macOS users

Bhagyashree R
21 May 2019
3 min read
Yesterday, Microsoft officially made the canary builds of Chromium-based Microsoft Edge available for macOS 10.12 and above. This announcement follows the release of canary and developer previews of Microsoft Edge for Windows 10 users last month.

https://twitter.com/MSEdgeDev/status/1130624513035591680

Edge for Mac had already surfaced online earlier this month. A Twitter user who goes by the name WalkingCat shared download links for the developer and canary builds even before the official release.

https://twitter.com/h0x0d/status/1125607963282948096

Updates in Microsoft Edge for macOS

The macOS preview build comes with essentially the same features as the Windows one, but tweaked to macOS conventions. These tweaks include changes in the fonts, menus, keyboard shortcuts, title casing, and other areas. The macOS version has rounded corners for tabs, which Microsoft plans to bring to Windows as well. Microsoft has also taken advantage of macOS hardware features to provide user experiences exclusive to macOS, including website shortcuts, tab switching, and video controls via the Touch Bar. Users will be able to access these features through familiar navigation with Mac trackpad gestures.

With this release, Microsoft aims to provide web developers with a consistent platform across different operating systems. This version comes with support for Progressive Web Apps that you can debug using the browser developer tools. Microsoft wrote in the announcement, "For the first time, web developers can now test sites and web apps in Microsoft Edge on macOS and be confident that those experiences will work the same in the next version of Microsoft Edge across all platforms."

Microsoft Edge Insider Channels

As on Windows 10, the macOS preview builds will be made available through three preview channels, Dev, Beta, and Canary, collectively called the Microsoft Edge Insider Channels. Canary builds receive updates every night. Dev builds are more stable than Canary builds and are updated weekly. Beta builds are the most stable of the three and receive updates every six weeks. Right now only the Canary Channel is open, from which you can download the canary builds of Microsoft Edge. Microsoft says that the Dev channel builds will be available "very soon" to run alongside the canary version.

You can share your feedback with Microsoft via the "Send feedback" smiley. To know more in detail, visit Microsoft's blog.

Microsoft makes the first preview builds of Chromium-based Edge available for testing
Microsoft confirms replacing EdgeHTML with Chromium in Edge

Node.js and JS Foundation announce intent to merge; developers have mixed feelings

Bhagyashree R
05 Oct 2018
3 min read
Yesterday, the Linux Foundation announced that the Node.js Foundation and the JS Foundation have agreed to possibly create a joint organization. Currently, they have not made any formal decisions regarding the organizational structure. They clarified that joining forces will not change the technical independence or autonomy of Node.js or of any of the 28 JS Foundation projects such as Appium, ESLint, or jQuery. A Q&A session will be held at Node+JS Interactive from 7:30 am to 8:30 am PT, October 10, at West Ballroom A to answer questions and get community input on the possible structure of a new foundation.

Why are the Node.js and JS Foundations considering merging?

The idea of this possible merger came from a need for tighter integration between both foundations to provide greater support for Node.js and a broader range of JavaScript projects. JavaScript is continuously evolving and is used for creating applications ranging across web, desktop, and mobile. This calls for increased collaboration in the JavaScript ecosystem to sustain continued and healthy growth.

What are the goals of this merger?

The following are a few of the goals of this merger, aimed at benefiting the broad Node.js and JavaScript communities:

- Enhanced operational excellence
- Streamlined member engagement
- Increased collaboration across the JavaScript ecosystem and affiliated standards bodies
- An "umbrella" project structure that brings stronger collaboration across all JavaScript projects
- A single, clear home for any project in the JavaScript ecosystem, so projects won't have to choose between the JS and Node.js ecosystems

Todd Moore, Node.js Board Chairperson and IBM VP Opentech, believes this merger will provide improved support to contributors: "The possibility of a combined Foundation and the synergistic enhancements this can bring to end users is exciting. Our ecosystem can only grow stronger and the Foundations ability to support the contributors to the many great projects involved improve as a result."

How are developers feeling about this potential move?

Not all developers are happy about this merger, which led to a discussion on Hacker News yesterday. One developer feels that the JS Foundation has been neglecting its responsibility towards many open source projects, has seen a reduction in funding, and has alienated many long-time contributors; according to him, this step could be "a last-ditch effort to retain some sort of relevancy."

On the other hand, another developer feels positive about the merger: "The JS Foundation is already hosting a lot of popular projects that run in back-end and build/CI environments -- webpack, ESLint, Esprima, Grunt, Intern, JerryScript, Mocha, QUnit, NodeRed, webhint, WebDriverIO, etc. Adding Node.JS itself to the mix would seem to make a lot of sense."

What we think of this move

This merger, if it happens, could unify the fragmented JavaScript ecosystem, bringing some much-needed relief to developers. It could also bring together sponsor members of the likes of Google, IBM, and Intel to support the huge number of JavaScript open source projects. We read this move as a reaction to the growing popularity of Python, Rust, and WebAssembly, all challenging JavaScript as the preferred web development ecosystem.

If you have any questions regarding the merger, you can submit them through the Google Form provided by the two foundations. Read the full announcement on the official website of the Linux Foundation and also check out the announcement by Node.js on Medium.

Node.js announces security updates for all their active release lines for August 2018
Why use JavaScript for machine learning?
The top 5 reasons why Node.js could topple Java


Mapbox introduces MARTINI, a client-side terrain mesh generation code

Vincy Davis
16 Aug 2019
3 min read
Two days ago, Vladimir Agafonkin, an engineer at Mapbox, introduced a client-side terrain mesh generation library called MARTINI, short for 'Mapbox's Awesome Right-Triangulated Irregular Networks Improved'. It uses a Right-Triangulated Irregular Network (RTIN) mesh, which consists of big right-angle triangles, to render smooth and detailed terrain in 3D.

RTIN has two advantages:

- The algorithm generates a hierarchy of all approximations of varying precision, enabling quick retrieval.
- It is very fast, making it feasible for client-side meshing from raster terrain tiles.

In a blog post, Agafonkin demonstrates a drag-and-zoom terrain visualization in which users can adjust mesh precision in real time. The visualization also displays the number of triangles generated together with an error rate.

How the RTIN hierarchy works

MARTINI uses the RTIN algorithm, which works on grids of size (2^k + 1) × (2^k + 1); "that's why we add 1-pixel borders on the right and bottom," says Agafonkin. The RTIN algorithm first builds an error map, a grid of error values that guides the subsequent mesh retrieval. The error map indicates whether a certain triangle has to be split or not, by taking the height error value into account.

The algorithm first calculates the error approximation of the smallest triangles, which is then propagated to the parent triangles. This process is repeated until the errors of the top two triangles are calculated and a full error map is produced, and it results in zero T-junctions and thus no gaps in the mesh.

For retrieving a mesh, the RTIN hierarchy starts with two big triangles, which are then subdivided to approximate the terrain according to the error map. Agafonkin says, "This is essentially a depth-first search in an implicit binary tree of triangles, and takes O(numTriangles) steps, resulting in nearly instant mesh generation." (A short usage sketch follows at the end of this article.)

Users have appreciated the MARTINI demo and animation presented by Agafonkin in the blog post. A user on Hacker News says, "This is a wonderful presentation of results and code, well done! Very nice to read." Another user comments, "Fantastic. Love the demo and the animation." A further comment on Hacker News reads, "This was a pleasure to play with on an iPad. Excellent work."

For more details on the code and algorithm used in MARTINI, check out Agafonkin's blog post.

Introducing Qwant Maps: an open source and privacy-preserving maps, with exclusive control over geolocated data
Top 7 libraries for geospatial analysis
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data
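As a short usage sketch (assuming the API shown in the project's README; the source of the height data is illustrative):

```js
import Martini from '@mapbox/martini';

// terrain: a Float32Array of 257 * 257 height values decoded from a raster tile
const martini = new Martini(257);          // grid size must be (2^k + 1)
const tile = martini.createTile(terrain);  // computes the error map once, up front
// Extract meshes of varying precision from the same tile, nearly instantly:
const { vertices, triangles } = tile.getMesh(10); // max height error of 10 meters
```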