
Tech News - Web Development

354 Articles

Babel 7 released with TypeScript and JSX fragment support

Sugandha Lahoti
28 Aug 2018
3 min read
Babel 7 has been released, three years after Babel 6. Babel is a JavaScript compiler, mainly used to convert ECMAScript 2015+ code into a backward-compatible version of JavaScript. It gives developers the freedom to use the latest JavaScript syntax without worrying about backward compatibility. The project has been going strong in the JavaScript ecosystem: there are currently over 1.3 million dependent repos on GitHub, 17 million downloads on npm per month, and hundreds of users including many major frameworks (React, Vue, Ember, Polymer) and companies (Facebook, Netflix, Airbnb).

Major breaking changes

Most of the major changes can be made automatically with the new babel-upgrade tool, which currently updates dependencies in package.json and the .babelrc config.

- Dropped support for unmaintained Node versions: 0.10, 0.12, 4, and 5.
- Introduced the @babel namespace to differentiate official packages, so babel-core becomes @babel/core.
- Deprecated the yearly presets (preset-es2015, etc.).
- Dropped the "Stage" presets (@babel/preset-stage-0, etc.) in favor of opting into individual proposals.
- Renamed some packages: any TC39 proposal plugin is now -proposal instead of -transform, so @babel/plugin-transform-class-properties becomes @babel/plugin-proposal-class-properties.
- Introduced a peerDependency on @babel/core for certain user-facing packages (e.g. babel-loader, @babel/cli).

TypeScript and JSX fragment support

Babel 7 now ships with TypeScript support, so Babel users get TypeScript benefits like catching typos, error checking, and a fast editing experience, and JavaScript users can take advantage of gradual typing. Install the TypeScript preset with: npm install --save-dev @babel/preset-typescript

JSX fragment support in Babel 7 allows returning multiple children from a component's render method. Fragments look like empty JSX tags; they let you group a list of children without adding extra nodes to the DOM. (A short sketch follows at the end of this article.)

Speed improvements

Babel 7 includes changes to optimize the code, as well as patches accepted from the V8 team. Babel is also part of the Web Tooling Benchmark alongside many other JavaScript tools. There are changes to the loose option of some plugins, and transpiled ES6 classes are now annotated with a /*#__PURE__*/ comment that hints to minifiers like Uglify and babel-minify that the code is eligible for dead-code elimination.

What's next

There are a lot of new features in the works: plugin ordering, better validation/errors, speed, rethinking loose/spec options, caching, using Babel asynchronously, and more. You can check out the roadmap doc for a more detailed version.

These are just a select few updates; the full set of changes is available on the Babel blog.

TypeScript 3.0 is finally released with 'improved errors', editor productivity and more
The 5 hurdles to overcome in JavaScript
Tools in TypeScript
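As promised above, a minimal sketch of the JSX fragment syntax Babel 7 can now compile (the component and its contents are hypothetical):

```jsx
import React from 'react';

function Columns() {
  // The empty <>...</> tags are a fragment: both cells are returned
  // as siblings without adding an extra wrapper node to the DOM.
  return (
    <>
      <td>Hello</td>
      <td>World</td>
    </>
  );
}
```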


Introducing Kweb: A Kotlin library for building rich web applications

Bhagyashree R
10 Dec 2018
2 min read
Kweb is a library for easily building web applications in the Kotlin programming language. It essentially eliminates the separation between browser and server from the programmer's perspective, meaning that events that only manipulate the DOM don't need a server round trip. As Kweb is written in Kotlin, users should have some familiarity with Kotlin and the Java ecosystem.

Kweb lets you keep all of the business logic on the server side and communicates with the web browser through efficient WebSockets. To handle asynchronicity efficiently, it takes advantage of Kotlin's powerful new coroutines mechanism. It also keeps state consistent across client and server by seamlessly conveying events between the two.

What are the features of Kweb?

- Makes the barrier between the web server and web browser mostly invisible to the programmer.
- Minimizes server-browser chatter and browser rendering overhead.
- Supports integration with powerful JavaScript libraries like Semantic, a UI framework designed for theming.
- Allows binding DOM elements in the browser directly to state on the server and updating them automatically through the observer and data-mapper patterns.
- Seamlessly integrates with Shoebox, a Kotlin library for persistent data storage that supports views and the observer pattern.
- Is easy to add to an existing project.
- Instantly updates your web browser in response to code changes.

The Kweb library is distributed via JitPack, a package repository for JVM and Android projects. Kweb takes advantage of the fact that in most web apps, the logic lives on the server side and the client can't be trusted. The library is in its infancy, but it works well enough to demonstrate that the approach is practical.

You can read more about Kweb on its official website.

Kotlin based framework, Ktor 1.0, released with features like sessions, metrics, call logging and more
Kotlin 1.3 released with stable coroutines, multiplatform projects and more
KotlinConf 2018: Kotlin 1.3 RC out and Kotlin/Native hits beta


HAProxy shares how you can use stick tables for server persistence, threat detection, and collecting metrics

Bhagyashree R
24 Sep 2018
3 min read
Yesterday, HAProxy published an article discussing stick tables, an in-memory storage mechanism. Introduced in 2010, stick tables let you track client activities across requests, enable server persistence, and collect real-time metrics. They are supported in both the HAProxy Community and Enterprise Editions.

You can think of a stick table as a type of key-value store. The key represents what you track across requests, such as a client IP, and the values are counters that, for the most part, HAProxy calculates for you.

What are the common use cases of stick tables?

Stack Exchange realized that beyond their core functionality, server persistence, stick tables could be used in many other scenarios. They sponsored the feature's development, and stick tables have since become an incredibly powerful subsystem within HAProxy. Their main uses include:

Server persistence

Stick tables were originally introduced to solve the problem of server persistence. HTTP requests are stateless by design, because each request is executed independently, without any knowledge of the requests that were executed before it. A stick table can store a piece of information, such as an IP address, cookie, or range of bytes in the request body, and associate it with a server. The next time HAProxy sees a connection using the same piece of information, it forwards the request to the same server. This helps track user activity from one request to the next and adds a mechanism for storing events and categorizing them by client IP or other keys. (A minimal configuration sketch follows at the end of this article.)

Bot detection

Stick tables can be used to defend against certain types of bot threats: preventing request floods, login brute-force attacks, vulnerability scanners, web scrapers, Slowloris attacks, and many more.

Collecting metrics

With stick tables, you can collect metrics to understand what is going on in HAProxy without enabling logging and having to parse the logs. In this scenario the Runtime API is used, which can read and analyze stick table data from the command line, a custom script, or an executable program. You can visualize this data using any dashboard of your choice, or use the fully-loaded dashboard that comes with HAProxy Enterprise Edition.

These were a few of the use cases where stick tables can be applied. To get a clearer understanding of stick tables and how they are used, check out the post by HAProxy.

Update: Earlier the article said, "Yesterday (September 2018), HAProxy announced that they are introducing stick tables." This was incorrect; as pointed out by a reader, stick tables have been around since 2010. The article has been updated to reflect this.

Use App Metrics to analyze HTTP traffic, errors & network performance of a .NET Core app [Tutorial]
How to create a standard Java HTTP Client in ElasticSearch
Why is everyone going crazy over WebAssembly?
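As promised above, a minimal haproxy.cfg sketch of the server-persistence use case, keyed on the client's source IP (the backend name and server addresses are hypothetical):

```
backend webservers
    balance roundrobin
    # Track up to 1M client IPs; entries expire after 30 minutes of inactivity
    stick-table type ip size 1m expire 30m
    # Persist each source IP to whichever server it was first balanced to
    stick on src
    server web1 192.0.2.10:80 check
    server web2 192.0.2.11:80 check
```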


Introducing Mint, a new HTTP client for Elixir

Amrata Joshi
26 Feb 2019
2 min read
Yesterday, the team behind Elixir introduced Mint, a new low-level HTTP client that provides a small and functional core. It is connection-based: each connection is a single structure with an associated socket belonging to the process that started the connection.

Features of Mint

Connections

Mint's HTTP connections are managed directly in the process that starts the connection. There is no connection pool used when a connection is opened, which helps users build a process structure that fits their application. Each connection has a single immutable data structure that the user manages. Mint uses "active mode" sockets, so data and events from the socket are sent as messages to the process that started the connection. The user then passes these messages to the stream/2 function, which returns the updated connection and a list of "responses". These responses are streamed back, with each response returned in partial response chunks.

Process-less

To many users, Mint may seem more cumbersome to use than other HTTP libraries. But by providing a low-level API without a predetermined process architecture, Mint gives flexibility to the user of the library. If a user writes GenStage pipelines, a pool of producers can fetch data from external sources via HTTP. With Mint, each GenStage producer can manage its own connection, reducing overhead and simplifying the code.

HTTP/1 and HTTP/2

The Mint.HTTP module provides a single interface for both HTTP/1 and HTTP/2 connections and also performs version negotiation on HTTPS connections. Users can also pick an HTTP version by using the Mint.HTTP1 or Mint.HTTP2 modules directly.

Safe-by-default HTTPS

When connecting over HTTPS, Mint performs certificate verification by default. Mint also has an optional dependency on CAStore, which provides certificates from Mozilla's CA Certificate Store.

A few users are happy about this news, with one user commenting on Hacker News, "I like that Mint keeps dependencies to a minimum." Another user commented, "I'm liking the trend of designing runtime-behaviour agnostic libraries in Elixir."

To know more about this news, check out Mint's official blog post.

Elixir 1.8 released with new features and infrastructure improvements
Elixir 1.7, the programming language for Erlang virtual machine, releases
Elixir Basics – Foundational Steps toward Functional Programming


Introduction to props in React from ui.dev's RSS Feed

Matthew Emerick
21 Aug 2020
3 min read
Whenever you have a system that relies on composition, it's critical that each piece of that system has an interface for accepting data from outside of itself. You can see this clearly illustrated by looking at something you're already familiar with: functions.

function getProfilePic (username) {
  return 'https://photo.fb.com/' + username
}

function getProfileLink (username) {
  return 'https://www.fb.com/' + username
}

function getAvatarInfo (username) {
  return {
    pic: getProfilePic(username),
    link: getProfileLink(username)
  }
}

getAvatarInfo('tylermcginnis')

We've seen this code before as our very soft introduction to function composition. Without the ability to pass data, in this case username, to each of our functions, our composition would break down. Similarly, because React relies heavily on composition, there needs to exist a way to pass data into components. This brings us to our next important React concept, props.

Props are to components what arguments are to functions. The same intuition you have about functions and passing arguments to functions can be directly applied to components and passing props to components. There are two parts to understanding how props work: first, how to pass data into components, and second, how to access the data once it's been passed in.

Passing data to a component

This one should feel natural because you've been doing something similar ever since you learned HTML. You pass data to a React component the same way you'd set an attribute on an HTML element.

<img src='' />
<Hello name='Tyler' />

In the example above, we're passing in a name prop to the Hello component.

Accessing props

Now the next question is, how do you access the props that are being passed to a component? In a class component, you can get access to props from the props key on the component's instance (this).

class Hello extends React.Component {
  render() {
    return (
      <h1>Hello, {this.props.name}</h1>
    )
  }
}

Each prop that is passed to a component is added as a key on this.props. If no props are passed to a component, this.props will be an empty object.

class Hello extends React.Component {
  render() {
    return (
      <h1>Hello, {this.props.first} {this.props.last}</h1>
    )
  }
}

<Hello first='Tyler' last='McGinnis' />

It's important to note that we're not limited in what we can pass as props to components. Just like we can pass functions as arguments to other functions, we're also able to pass components (or really anything we want) as props to other components.

<Profile
  username='tylermcginnis'
  authed={true}
  logout={() => handleLogout()}
  header={<h1>👋</h1>}
/>

If you pass a prop without a value, that value will be set to true. These are equivalent.

<Profile authed={true} />
<Profile authed />
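For comparison, props work the same way with function components, which React also supports; they receive props as their first argument rather than on this. A minimal sketch:

```jsx
import React from 'react';

// In a function component, props arrive as the first argument,
// so there is no `this.props` to reach for.
function Hello(props) {
  return <h1>Hello, {props.name}</h1>;
}

// Usage is identical: <Hello name='Tyler' />
```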


Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?

Bhagyashree R
31 Jan 2019
3 min read
Yesterday, Craig Hockenberry, a partner at The Iconfactory, filed a bug report on WebKit proposing a limit on how much JavaScript code a website can load, to avoid resource abuse of user computers.

Hockenberry feels that though content blocking has helped reduce resource abuse, providing better performance and better battery life, there are a few downsides to using content blockers. His bug report said, "it's hurting many smaller sites that rely on advertising to keep the lights on. More and more of these sites are pleading to disable content blockers." This results in collateral damage to smaller sites.

As a solution, he suggested that we need to find a way to incentivize JavaScript developers to keep their codebases small and minimal. "Great code happens when developers are given resource constraints... Lack of computing resources inspires creativity," he adds. As an end result, he believes we could allow sites to show as many advertisements as they want while keeping the overall size under a fixed amount. He believes we could also ask users for permission with a simple dialog box, for example: "The site example.com uses 5 MB of scripting. Allow it?"

This bug report triggered a discussion on Hacker News, and though a few users agreed with his suggestion, most were against it. Some developers mentioned that users usually do not read dialogs and blindly click OK to make them go away. And even if users read the dialog, they will not know how much JavaScript code is too much. "There's no context to tell her whether 5MB is a lot, or how it compares to payloads delivered by similar sites. It just expects her to have a strong opinion on a subject that nobody who isn't a coder themselves would have an opinion about," one commenter added.

Other ways to prevent JavaScript code from slowing down browsers

Despite the disagreement, developers do agree that there is a need for user-friendly resource limitations in browsers, and some suggested other ways to prevent JavaScript bloat. One of them said it would be good to add resource limits on CPU usage, number of HTTP requests, and memory usage:

"CPU usage (allows an initial burst, but after a few seconds dial down to max ~0.5% of CPU, with additional bursts allowed after any user interaction like click or keyboard)

Number of HTTP requests (again, initial bursts allowed and in response to user interaction, but radically delay/queue requests for the sites that try to load a new ad every second even after the page has been loaded for 10 minutes)

Memory usage (probably the hardest one to get right though)"

Another user adds, "With that said, I do hope we're able to figure out how to treat web 'sites' and web 'apps' differently - for the former, I want as little JS as possible since that just gets in the way of content, but for the latter, the JS is necessary to get the app running, and I don't mind if its a few megabytes in size."

You can read the bug report on WebKit Bugzilla.

D3.js 5.8.0, a JavaScript library for interactive data visualizations in browsers, is now out!
16 JavaScript frameworks developers should learn in 2019
npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn

Brave launches its Brave Ads platform sharing 70% of the ad revenue with its users

Bhagyashree R
25 Apr 2019
4 min read
In January this year, Brave announced that it was previewing its new advertising feature, Brave Ads. Yesterday, it opened this feature to all users of its desktop browser for macOS, Windows, and Linux. Brave Ads is an opt-in digital advertising feature built with user privacy in mind.

https://twitter.com/brave/status/1121081425254473728

We have seen many pay-to-surf sites before, but most of them eventually disappeared with the dot-com bubble. However, Brendan Eich, the CEO and co-founder of Brave Software, is confident about his plan. He said, "With Brave Ads, we are launching a digital ad platform that is the first to protect users' data rights and to reward them for their attention." He further adds, "Brave Ads also aims to improve the economics and conversion of the online advertising industry, so that publishers and advertisers can thrive without the intermediaries that collect huge fees and that contribute to web-wide surveillance. Privacy by design and no tracking are integral to our mission to fix the Web and its funding model."

Brave is working with various ad networks and brands to create the Brave Ads catalog inventory. These catalogs are pushed to available devices on a recurring basis. The ads for these catalogs are supplied by Vice, Home Chef, Ternio BlockCard, MyCrypto, eToro, BuySellAds, TAP Network, AirSwap, Fluidity, and Uphold.

How do Brave Ads work?

Brave is based on Chromium and blocks tracking scripts and other technologies that spy on your online activity, so advertisements are generally not shown by default in the Brave browser. Brave Ads puts users in control by allowing them to decide how many ads they would like to see. It protects user privacy by doing ad matching directly on the user's device, so personal data is not leaked to anyone.

Of the revenue generated by viewing these ads, users get a 70% share and the remaining 30% goes to Brave. This 70% cut is estimated at about $5 per month, according to Eich. Users are paid in Brave's bitcoin-style "cryptocurrency" called Basic Attention Tokens (BAT), which they can claim at the close of every Brave Rewards monthly cycle.

To view Brave Ads, users are required to enable Brave Rewards by going to the Brave Rewards settings page (brave://rewards/). Those who are already using Brave Rewards will get a notification screen to enable the feature. Once a user opts into Brave Rewards, they are presented with offers in the form of notifications. When users click on these notifications, they are directed to a full-page ad in a new ad tab.

Right now, users can auto-contribute their earned rewards to their favorite websites or content creators. The browser will soon allow users to spend BAT on premium content and redeem it for real-world rewards such as hotel stays, restaurant vouchers, and gift cards. Brave also plans to add an option that lets users convert their BAT into local fiat currency through exchange partners.

Brave Ads has received a very mixed reaction from users. While some compare its advertising model to YouTube's, others think the implementation is unethical. One user on Reddit commented, "This idea is very interesting. It reminds me of how YouTube shares their ad revenue with content creators, and that in turn grows YouTube's network and business... The more one browsed or shared of their data, the more one would get paid. It's simple business." A skeptical user said, "I'm a fan of Brave's mission, and the browser itself is great (basically Chromium but faster), but the practice of hiding publisher's ads but showing their own, which may or may not end up compensating the publisher, seems fairly unethical."

For more details, check out the official announcement by Brave.

Brave introduces Brave Ads that share 70% revenue with users for viewing ads
Brave Privacy Browser has a 'backdoor' to remotely inject headers in HTTP requests: HackerNews
Brave 0.55, ad-blocking browser with 22% faster load time, is generally available and works on Chromium


Firefox 67 will come with faster and more reliable JavaScript debugging tools

Bhagyashree R
20 May 2019
3 min read
Last week, the Firefox DevTools Debugger team shared recent updates to Firefox DevTools that make debugging modern apps more consistent. They have also worked on making the debugger more predictable and capable of understanding common web development tools like webpack, Babel, and TypeScript. These updates can be tried out in Firefox 67, which is planned for release tomorrow (May 21). The team also shared that Firefox 68 will come with a more "polished" version of these features.

https://twitter.com/FirefoxDevTools/status/1129066199017353216

Today, every browser comes with a powerful suite of developer tools that allows you to easily inspect and debug your web applications. These tools enable you to do things like inspect currently-loaded JavaScript, edit pages on the fly, quickly diagnose problems, and more. The Firefox team has introduced many improvements and updates to these tools; here are some of the highlights:

Revamped source map support

Source maps provide a way to keep your client-side code readable and debuggable even after combining and minifying it. The new debugger comes with revamped support for source maps that now "perfects the illusion that you're debugging your code, not the compiled output from Babel, Webpack, TypeScript, vue.js, etc." To help developers generate correct source maps, the team and the community have contributed patches to build tools like Babel, a JavaScript compiler and configurable transpiler.

Predictable breakpoints for effortless pausing and stepping

The improved debugger architecture solves several issues developers commonly faced, like lost breakpoints, pausing in the wrong script, or stepping through pretty-printed code. Developers will also be able to easily debug minified scripts, arrow functions, and chained method calls with the help of inline breakpoints.

Console debugging with logpoints

Developers often resort to console logging (using console.log statements to print messages to the console) when they want to quickly observe their program's flow without pausing execution. However, this way of debugging can become quite tedious. That is why, starting from Firefox 67, developers get a new kind of breakpoint called a "logpoint" that dynamically injects console.log() statements into a running application. (A short sketch of the idea follows at the end of this article.)

Better debugging for JavaScript workers

A web worker is a script that runs in the background without affecting the main execution thread of a web application. It takes care of laborious processing, allowing the main thread to run without being slowed down. Firefox now comes with an updated Threads panel through which you can switch between contexts and independently pause different execution contexts. This allows workers and their scripts to be debugged within the same Debugger panel.

These were some highlights from a long list of updates and improvements. Check out the official announcement by Mozilla to know more in detail.

Mozilla developers have built BugBug which uses machine learning to triage Firefox bugs
Mozilla adds protection against fingerprinting and Cryptomining scripts in Firefox Nightly and Beta
Mozilla is exploring ways to reduce notification permission prompt spam in Firefox
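As promised above, a rough sketch of what a logpoint buys you. This illustrates the effect, not Firefox's internal mechanism, and the function is hypothetical:

```js
function applyDiscount(price, rate) {
  // Instead of editing the source to add a log statement, you set a
  // logpoint on the line below with the expression `price, rate`.
  // The debugger then logs those values on every hit, as if
  //   console.log(price, rate);
  // had been inserted here, without pausing execution.
  return price * (1 - rate);
}
```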


LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces

Bhagyashree R
02 Jul 2019
3 min read
Yesterday, the team behind V8, the open-source JavaScript engine, shared the work they and the community have been doing to make the LLVM WebAssembly backend the default backend in Emscripten. LLVM is a compiler framework and Emscripten is an LLVM-to-Web compiler.

https://twitter.com/v8js/status/1145704863377981445

The LLVM WebAssembly backend will be the third backend in Emscripten. The original compiler was written in JavaScript and parsed LLVM IR in text form. In 2013, a new backend called Fastcomp was written by forking LLVM; it was designed to emit asm.js and was a big improvement in code quality and compile times. According to the announcement, the LLVM WebAssembly backend beats the old Fastcomp backend on most metrics. Here are the advantages the new backend comes with:

Much faster linking

The LLVM WebAssembly backend allows incremental compilation using WebAssembly object files. Fastcomp uses LLVM Intermediate Representation (IR) in bitcode files, which means that at link time the IR still has to be compiled by LLVM; this is why it shows slower link times. WebAssembly object files (.o), on the other hand, already contain compiled WebAssembly code, which accounts for much faster linking.

Faster and smaller code

The new backend shows a significant code size reduction compared to Fastcomp. "We see similar things on real-world codebases that are not in the test suite, for example, BananaBread, a port of the Cube 2 game engine to the Web, shrinks by over 6%, and Doom 3 shrinks by 15%!," shared the team in the announcement. The faster and smaller code comes from LLVM's better IR optimizations and its smarter backend codegen, which can do things like global value numbering (GVN). The team has also put effort into tuning the Binaryen optimizer, which further helps make the code smaller and faster compared to Fastcomp.

Support for all LLVM IR

While Fastcomp could handle the LLVM IR generated by clang, it often failed on other sources. The LLVM WebAssembly backend, by contrast, can handle any IR, as it uses the common LLVM backend infrastructure.

New WebAssembly features

Fastcomp generates asm.js before running asm2wasm, which makes it difficult to handle new WebAssembly features like tail calls, exceptions, SIMD, and so on. "The WebAssembly backend is the natural place to work on those, and we are in fact working on all of the features just mentioned!," the team added.

To test the WebAssembly backend, you just have to run the following two commands (a fuller usage sketch follows at the end of this article):

emsdk install latest-upstream
emsdk activate latest-upstream

Read more in detail on V8's official website.

V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more
V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more
Google's V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon
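As promised above, once the upstream toolchain is active, compilation proceeds as usual; a minimal sketch (the source file name is a placeholder):

```sh
# Install and activate the LLVM-backend toolchain (from the article)
emsdk install latest-upstream
emsdk activate latest-upstream

# Compile a C file to a .wasm module plus the JS/HTML glue around it
emcc hello.c -O2 -o hello.html
```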


Brave ad-blocker gives 69x better performance with its new engine written in Rust

Bhagyashree R
27 Jun 2019
3 min read
Looks like Brave has also jumped on the bandwagon of rewriting its components in the Rust programming language. Yesterday, its team announced that they have reimplemented the browser's ad-blocker, previously written in C++, in Rust. As a result, the ad-blocker is now 69x faster than the current engine. The team chose Rust because it is a memory-safe and performant language. The new ad-blocker implementation can be compiled to native code and run within the native browser core, and it can also be packaged as a standalone Node.js module. The reimplemented version is available on Brave's Dev and Nightly channels.

How does this new ad-blocking algorithm work?

The previous ad-blocking algorithm relied on the observation that most requests are passed through without blocking. It used the Bloom filter data structure to track fragments of requests that may match, ruling out those that do not. The new implementation is based on uBlock Origin and Ghostery's ad-blocking approach, which uses tokenization specific to ad-block rule matching against URLs, and rule evaluation optimized for the different kinds of rules.

What makes the new algorithm faster is that it quickly eliminates from the search any rules that are not likely to match a request. "To organize filters in a way that speeds up their matching, we observe that any alphanumeric (letters and numbers) substring that is part of a filter needs to be contained in any matching URL as well," the team explained. Each of these substrings is hashed to a single number, producing a set of tokens; matching becomes much easier and faster when a URL is tokenized in the same way. The team further wrote, "Even though by nature of hashing algorithms multiple different strings could hash to the same number (a hash collision), we use them to limit rule evaluation to only those that could possibly match." If a rule has a specific hostname, it is tokenized too, and if a rule contains a single domain option, the entire domain is hashed as another token. (A rough sketch of this tokenization idea follows below.)

Performance gains made by the reimplementation

For the performance evaluation, the team used the dataset published with the Ghostery ad-blocker performance study, which includes 242,945 requests across 500 popular websites. The new ad-blocker was tested against this dataset with different ad-block rule lists, including the biggest ones: EasyList and EasyPrivacy combined. The team performed all benchmarks on the adblock-rust 0.1.21 library, using a 2018 MacBook Pro with a 2.6 GHz Intel Core i7 CPU and 32GB RAM. The reimplementation showed the following performance gains:

- The new algorithm with its optimized set of rules is 69x faster on average than the current engine.
- When tested with the popular filter list combination of EasyList and EasyPrivacy, it gave a "class-leading performance of spending only 5.7μs on average per request."
- It already supports most of the filter rule syntax that has evolved beyond the original specification, which will let the team handle web compatibility issues better and faster.
- The browser does some of the work that is helpful to the ad-blocker, further reducing overhead and resulting in an ad-blocker with best-in-class performance.

Head over to Brave's official website to know more in detail.
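As promised, a rough JavaScript sketch of the tokenization idea described above. This is an illustration, not Brave's Rust implementation; the hash function and names are made up for the example:

```js
// Split a string into alphanumeric runs, e.g. "ads.example.com/banner"
// -> ["ads", "example", "com", "banner"].
function alphanumericRuns(s) {
  return s.toLowerCase().match(/[a-z0-9]+/g) || [];
}

// Toy string hash; a real implementation would use a proper hash function.
function hashToken(token) {
  let h = 5381;
  for (const ch of token) h = ((h * 33) ^ ch.charCodeAt(0)) >>> 0;
  return h;
}

// A filter can only match a URL if every alphanumeric substring of the
// filter (and hence every one of its tokens) also appears in the URL.
function couldMatch(filterTokens, url) {
  const urlTokens = new Set(alphanumericRuns(url).map(hashToken));
  return filterTokens.every((t) => urlTokens.has(t));
}

const filterTokens = alphanumericRuns('/banner/ads').map(hashToken);
console.log(couldMatch(filterTokens, 'https://ads.example.com/banner/x.png')); // true
console.log(couldMatch(filterTokens, 'https://example.com/logo.png'));         // false
```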
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Brave introduces Brave Ads that share 70% revenue with users for viewing ads
Chromium-based Brave browser shows 22% faster page load time than its Muon-based counterpart

Microsoft confirms replacing EdgeHTML with Chromium in Edge

Prasad Ramesh
07 Dec 2018
2 min read
Earlier this week it was reported that Microsoft is ditching EdgeHTML for Chromium in Edge, the Windows 10 default browser. Yesterday, Microsoft officially confirmed this in a blog post. The post, by Joe Belfiore, VP of Windows, stated: "we intend to adopt the Chromium open source project in the development of Microsoft Edge on the desktop to create better web compatibility for our customers and less fragmentation of the web for all web developers."

What does this shift to Chromium mean?

Gradually, over the course of 2019, Edge will change under the hood. These changes will be developed in the open, and the key aspects are:

- Development of the desktop version of Microsoft Edge will move to a Chromium-compatible web platform. Microsoft intends to align Edge both with web standards and with other Chromium-based browsers, which improves compatibility for everyone and makes testing easier for developers.
- Working on an open-source engine like Chromium allows Microsoft to deliver more frequent updates to Edge.
- Microsoft Edge is currently available only on Windows; this shift could get Edge running on other OSes like Linux and macOS.
- Microsoft also intends to contribute more to the Chromium engine itself, to make Chromium-based browsers better on Windows devices.

Users don't have to worry much about this change; if anything, it might bring Chrome-like extensions to Edge. If you're a web developer, you can go to the Microsoft Insider website to try preview builds and contribute.

Currently, Chrome holds arguably most of the market share in the browser space. Microsoft had problems working with EdgeHTML and building a browser that would be widely adopted; perhaps basing Edge on Chromium will actually make people want to use Edge. Two tech behemoths will now use the same engine for their browsers, which could mean more competition within the Chromium ecosystem. Where does this leave Mozilla Firefox, which uses the Gecko engine, and Opera, which uses Blink?

For more details about the engine shift, visit the Microsoft website.

Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser
Firefox Reality 1.0, a browser for mixed reality, is now available on Viveport, Oculus, and Daydream
Microsoft becomes the world's most valuable public company, moves ahead of Apple


Introducing Zero Server, a zero-configuration server for React, Node.js, HTML, and Markdown

Bhagyashree R
27 Feb 2019
2 min read
Developers behind the CodeInterview.io and RemoteInterview.io websites have come up with Zero, a web framework that simplifies modern web development. Zero removes the overhead of the usual project configuration for routing, bundling, and transpiling to make it easier to get started.

Zero applications consist of static files and code files. Static files are all non-code files, like images, documents, and media files. Code files are parsed, bundled, and served by a builder for that particular file type. Zero supports Node.js, React, HTML, and Markdown/MDX.

Features in Zero server

Autoconfiguration

Zero eliminates the need for any configuration files in your project folder. Developers just place their code, and it is automatically compiled, bundled, and served.

File-system based routing

Routing is based on the file system: for example, if your code is placed in ./api/login.js, it is exposed at http://domain.com/api/login. (A sketch of what such an endpoint might look like follows at the end of this article.)

Auto-dependency resolution

Dependencies are automatically installed and resolved. To install a specific version of a package, developers just have to create their own package.json.

Support for multiple languages

Zero supports code written in multiple languages. With Zero, you can do things like expose a TensorFlow model as a Python API and write user login code in Node.js, all under a single project folder.

Better error handling

Zero isolates endpoints from each other by running each in its own process. This ensures that if one endpoint crashes, there is no effect on any other component of the application. For instance, if /api/login crashes, there is no effect on the /chatroom page or the /api/chat API. Zero also automatically restarts crashed endpoints when the next user visits them.

To know more about the Zero server, check out its official website.

Introducing Mint, a new HTTP client for Elixir
Symfony leaves PHP-FIG, the framework interoperability group
Google Chrome developers "clarify" the speculations around Manifest V3 after a study nullifies their performance hit argument
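As promised above, a sketch of a file-system routed endpoint. The Express-style (req, res) handler signature here is an assumption for illustration, not confirmed by the announcement; check Zero's docs for the exact API:

```js
// ./api/login.js - served at http://domain.com/api/login
// NOTE: the (req, res) handler shape shown here is assumed.
module.exports = (req, res) => {
  res.send({ ok: true, message: 'logged in' });
};
```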


Scrivito launches serverless JavaScript CMS

Kunal Chaudhari
17 Apr 2018
2 min read
Scrivito, a SaaS-based content management service, has launched a new breed of cloud-based, serverless JavaScript CMS specifically targeted at medium to large-sized businesses. While the world is shifting to cutting-edge cloud technology, web CMS platforms are still stuck in the past. Thomas Witt, co-founder and CTO of Scrivito, said: "We're at a tipping point. Agencies and dev teams that stick with Wordpress and the like are doomed to be overtaken by the inevitable shift to serverless computing and JavaScript development."

Scrivito checks the boxes for the key trending innovations in the web development space. Serverless? Yes. Cloud native? Yes. So what's unique about this content management interface, and how exactly does it differentiate itself from traditional CMSes?

Scrivito requires zero maintenance thanks to the cloud

This is Scrivito's most distinctive feature. Since it is a cloud-based service, it allows developers to spin up a CMS instance without having to install anything or configure databases, search engine indexing, backups, or metadata. This means no downtime, no software patches, and minimal maintenance effort.

Component reusability powered by ReactJS

Scrivito is powered by Facebook's popular frontend framework, React. Thanks to React's reusable UI components and flexibility, developers can create complex and interactive functionality such as configurators or multi-page forms with ease. It is not only built for developers: it also makes it easier for agencies and marketing teams to build, edit, and manage secure, reliable, and cost-effective sites, microsites, and landing pages.

Scrivito is extendable

Scrivito is easily extendable because it doesn't require any infrastructure. Developers and editors can create their own widgets and data structures on the fly. Its working-copies technology brings version control concepts from software development to the CMS world, eliminating the need for a staging server and allowing parallel editing of content across teams. Its API-driven approach combines the benefits of a serverless and a headless CMS with WYSIWYG editing in a single solution.

Scrivito has certainly ignited a revolution in the web development space by introducing serverless technologies to CMS applications. It is available at different price points for personal and enterprise users. To know more about its features and pricing options, check out the project's official webpage.

Mozilla brings back Firefox’s Test Pilot Program with the introduction of Firefox Private Network Beta

Bhagyashree R
11 Sep 2019
3 min read
Yesterday, Mozilla relaunched its Test Pilot Program for the second time, alongside the release of the Firefox Private Network beta. The Test Pilot Program gives Firefox users a way to try out its newest features and share feedback with Mozilla. Mozilla first introduced the Test Pilot Program as an add-on for Firefox 3.5 in 2009 and relaunched it in 2016. In January this year, however, it decided to close the program in the process of evolving its "approach to experimentation even further."

While the name is the same, the difference is that the features you get to try now will be much more stable. Explaining the difference between this iteration of the Test Pilot Program and the previous ones, the team wrote in the announcement, "The difference with the newly relaunched Test Pilot program is that these products and services may be outside the Firefox browser, and will be far more polished, and just one step shy of general public release."

Firefox Private Network Beta

The first project available for beta testing under this iteration of the Test Pilot Program is Firefox Private Network. It is currently free and available only to Firefox desktop users in the United States. Firefox Private Network is an opt-in, privacy-focused feature that gives users access to a private network when they are connected to free, open Wi-Fi. It encrypts the web addresses you visit and the data you share, sending your data through a proxy service run by Mozilla's partner, Cloudflare. It also masks your IP address to protect you from third-party trackers around the web.

(Image source: Mozilla)

Read also: Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users

Users have already started testing the feature. A user on Hacker News shared, "I just got done testing this, it assigns a U.S. IPv6 address and uses the Cloudflare Warp network. My tests showed a very stable download speed of 150.3 Mbps and an upload speed of 13.8 Mbps with a latency of 31ms." Another user commented, "I quite like the fact that once this goes mainstream, it'd help limit surveillance and bypass censorship on the web in one fell swoop without having to install or trust 3p other than the implicit trust in Mozilla and its partners (in this case, Cloudflare). Knowing Cloudflare, I'm sure this proxy is as much abt speed and latency as privacy and security."

Some users were skeptical about the use of Cloudflare in this feature: "As much as I like the idea of baking better privacy tools into the browser, it's hard for me to get enthusiastic about the idea of making Cloudflare even more of an official man-in-the-middle for all network traffic than they already are," one user added. Others recommended trying a Tor proxy instead: "I'd like to point out though, that, one could run a Tor proxy (it also has a VPN mode) on their phones [0] today to work around censorship and surveillance; anonymity is a bit tricky over tor-as-a-proxy. The speeds over Tor are decent and nothing you can't tolerate whilst casual web browsing. It is probably going to be free forever unlike Firefox's private network."

Read also: The Tor Project on browser fingerprinting and how it is taking a stand against it

Read Mozilla's official announcement to know more in detail.
Other news in web development

Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more
GitHub updates to Rails 6.0 with an incremental approach
Wasmer's first Postgres extension to run WebAssembly is here!


Drupal 9 will be released in 2020, shares Dries Buytaert, Drupal’s founder

Bhagyashree R
14 Dec 2018
2 min read
At Drupal Europe 2018, Dries Buytaert, the founder and lead developer of the Drupal content management system, announced that Drupal 9 will be released in 2020. Yesterday, he shared a more detailed timeline, according to which Drupal 9 is planned for release on June 3, 2020.

One of the biggest dependencies of Drupal 8 is Symfony 3, which is scheduled to reach its end of life by November 2021. This means that security bugs in Symfony 3 will no longer be fixed, and sites have to move to Drupal 9 for continued support and security. Going by the plan, site owners will have at least one year to upgrade from Drupal 8 to Drupal 9.

Drupal 9 will not have a separate code base; rather, the team is adding new functionality to Drupal 8 as backward-compatible code and experimental features. Once they are sure these features are stable, the old functionality will be deprecated.

One of the most notable updates will be support for Symfony 4 or 5 in Drupal 9. Since Symfony 5 has not yet been released, the scope of its changes is not clear to the Drupal team, so they are focusing on running Drupal 8 with Symfony 4. The end goal is to make Drupal 8 work with Symfony 3, 4, or 5, so that any issues encountered can be fixed before Drupal 9 starts requiring Symfony 4 or 5.

Because Drupal 9 is being built in Drupal 8, things will be much easier for every stakeholder. Drupal core contributors will just have to remove the deprecated functionality and upgrade the dependencies, and for site owners it will be much easier to upgrade to Drupal 9 than it was to upgrade to Drupal 8. Dries Buytaert wrote in his post, "Drupal 9 will simply be the last version of Drupal 8, with its deprecations removed. This means we will not introduce new, backwards-compatibility breaking APIs or features in Drupal 9 except for our dependency updates. As long as modules and themes stay up-to-date with the latest Drupal 8 APIs, the upgrade to Drupal 9 should be easy. Therefore, we believe that a 12- to 18-month upgrade period should suffice."

You can read the full announcement on Drupal's website.

WordPress 5.0 (Bebo) released with improvements in design, theme and more
5 things to consider when developing an eCommerce website
Introduction to WordPress Plugin