
Tech News - Web Development

354 Articles

Mozilla announces final four candidates that will replace its IRC network

Bhagyashree R
13 Sep 2019
4 min read
In April this year, Mozilla announced that it would be shutting down its IRC network, stating that it creates “unnecessary barriers to participation in the Mozilla project.” Last week, Mike Hoye, the Engineering Community Manager at Mozilla, shared the four final candidates for Mozilla’s community-facing synchronous messaging system: Mattermost, Matrix/Riot.im, Rocket.Chat, and Slack.

Mattermost is a flexible, self-hostable, open-source messaging platform that enables secure team collaboration. Riot.im is an open-source instant messaging client based on the federated Matrix protocol. Rocket.Chat is also a free and open-source team chat collaboration platform. The only proprietary option on the shortlist is Slack, a widely used team collaboration hub.

Read also: Slack stock surges 49% on its first trading day on the NYSE after its direct public offering

Explaining how Mozilla shortlisted these messaging systems, Hoye wrote, “These candidates were assessed on a variety of axes, most importantly Community Participation Guideline enforcement and accessibility, but also including team requirements from engineering, organizational-values alignment, usability, utility and cost.” He said that though there were a whole lot of options to choose from, these were the ones that best suited Mozilla’s current institutional needs and organizational goals.

Mozilla will soon be launching official test instances of each of the candidates for open testing. After the one-month trial period, the team will take feedback in dedicated channels on each of those servers. You can also share your feedback in #synchronicity on IRC.mozilla.org and on a forum on Mozilla’s community Discourse instance that the team will be creating soon.

Mozilla's timeline for transitioning to the finalized messaging system:
- September 12th to October 9th: Mozilla will run the proof-of-concept trials and accept community feedback.
- October 9th to 30th: It will discuss the feedback, draft a proposed post-IRC plan, and get approval from the stakeholders.
- December 1st: The new messaging system will be launched.
- Launch to March 1st, 2020: A transition period for support tooling and developers. After this, Mozilla’s IRC network will be shut down.

Hoye shared that the internal Slack instance will keep running regardless of the result, to ensure smooth communication. He wrote, “Internal Slack is not going away; that has never been on the table. Whatever the outcome of this process, if you work at Mozilla your manager will still need to be able to find you on Slack, and that is where internal discussions and critical incident management will take place.”

In a discussion on Hacker News, many rooted for Matrix. A user commented, “I am hoping they go with Matrix, least then I will be able to have the choice of having a client appropriate to my needs.” Another user added, “Man, I sure hope they go the route of Matrix! Between the French government and Mozilla, both potentially using Matrix would send a great and strong signal to the world, that matrix can work for everyone! Fingers crossed!”

Many also appreciated that Mozilla chose three open-source messaging systems. A user commented, “It's great to see 3/4 of the options are open source! Whatever happens, I really hope the community gets behind the open-source options and don't let more things get eaten up by commercial silos cough slack cough.” Some were not happy that Zulip, an open-source group chat application, was not selected. “I'm sad to see Zulip excluded from the list. It solves the #1 issue with large group chats - proper threading. Nothing worse than waking up to a 1000 message backlog you have to sort through to filter out the information relevant to you. Except for Slack, all of their other choices have very poor threading,” a user commented.

Check out Hoye’s official announcement for more details.

Other news in web:
- Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
- Wasmer’s first Postgres extension to run WebAssembly is here!
- JavaScript will soon support optional chaining operator as its ECMAScript proposal reaches stage 3


Introducing Howler.js, a JavaScript audio library with full cross-browser support

Bhagyashree R
01 Nov 2018
2 min read
Developed by GoldFire Studios, Howler.js is an audio library for the modern web that makes working with audio in JavaScript easy and reliable across all platforms. It defaults to the Web Audio API and falls back to HTML5 Audio to provide support for all browsers and platforms, including IE9 and Cordova. Originally developed for an HTML5 game engine, it works just as well for any other audio-related function in web applications.

Features of Howler.js:
- Single API for all audio needs: It provides a simple and consistent API that makes it easier to build audio experiences in your application.
- Audio sprites: For more precise playback and lower resource usage, you can define and control segments of files with audio sprites.
- Supports all codecs: MP3, MPEG, OPUS, OGG, OGA, WAV, AAC, CAF, M4A, MP4, WEBA, WEBM, DOLBY, and FLAC.
- Auto-caching for improved performance: Loaded sounds are automatically cached and reused on subsequent calls, saving bandwidth and improving performance.
- Modular architecture: You can easily use and extend the library to add custom features.

Which browsers does it support? Howler.js is compatible with Google Chrome 7.0+, Internet Explorer 9.0+, Firefox 4.0+, Safari 5.1.4+, Mobile Safari 6.0+, Opera 12.0+, and Microsoft Edge.

Read more about Howler.js on its official website and also check out its GitHub repository.

Related:
- npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
- InfernoJS v6.0.0, a React-like library for building high-performance user interfaces, is now out
- The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI
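The audio-sprite feature described above can be sketched as follows. The file names, sprite names, and timings are made up for illustration, and the Howl calls are shown commented out because they need a browser (or bundled) environment with the howler package installed:

```javascript
// A sketch of defining audio sprites for Howler.js (hypothetical assets).
function makeSpriteOptions() {
  return {
    // Howler plays the first source format the browser supports.
    src: ['game-audio.webm', 'game-audio.mp3'],
    sprite: {
      // name: [offset in ms, duration in ms]
      laser: [0, 500],
      explosion: [600, 1200],
    },
  };
}

// In a browser environment:
// import { Howl } from 'howler';
// const sound = new Howl(makeSpriteOptions());
// sound.play('laser'); // plays only the 500 ms "laser" segment
```

Each named sprite plays just its segment of the file, which is what gives the more precise playback and lower resource usage the announcement mentions.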


Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

Bhagyashree R
18 Jul 2019
2 min read
Yesterday, Syrus Akbary, the founder and CEO of Wasmer, introduced WebAssembly Interfaces. They provide a convenient s-expression (symbolic expression) text format that can be used to validate the imports and exports of a Wasm module.

Why are WebAssembly Interfaces needed?

The Wasmer runtime initially supported only running Emscripten-generated modules and later added support for other ABIs, including WASI and Wascap. WebAssembly runtimes like Wasmer have to do a lot of checks before starting an instance, to ensure a WebAssembly module is compliant with a certain Application Binary Interface (Emscripten or WASI). They check whether the module's imports and exports are what the runtime expects, namely that the function signatures and global types match. These checks are important for:
- Making sure a module is going to work with a certain runtime.
- Assuring a module is compatible with a certain ABI.
- Creating a plugin ecosystem for any program that uses WebAssembly as part of its plugin system.

The team behind Wasmer introduced WebAssembly Interfaces to ease this process by providing a way to validate that imports and exports are as expected. The announcement shows what a WebAssembly Interface for WASI looks like (source: Wasmer).

WebAssembly Interfaces allow you to run various programs with each ABI, such as Nginx (Emscripten) and Cowsay (WASI). When used together with WAPM (the WebAssembly Package Manager), you will also be able to use the entire WAPM ecosystem to create, verify, and distribute plugins. The team has also proposed it as a standard for defining a specific set of imports and exports that a module must have, in a way that is statically analyzable.

Read the official announcement by Wasmer.

Related:
- Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
- LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces
- Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more
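The kind of import check a runtime performs can be illustrated in plain JavaScript with the standard WebAssembly.Module.imports() reflection API. This is a sketch of the idea, not Wasmer's implementation, and the expected-import name is hypothetical:

```javascript
// Report which expected imports a compiled Wasm module is missing.
function missingImports(module, expected) {
  const have = new Set(
    WebAssembly.Module.imports(module).map((i) => `${i.module}.${i.name}`)
  );
  return expected.filter((name) => !have.has(name));
}

// The 8 bytes below (magic number + version) form the smallest valid
// Wasm module; it declares no imports at all.
const empty = new WebAssembly.Module(
  new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00])
);

console.log(missingImports(empty, ['wasi_unstable.fd_write']));
// → [ 'wasi_unstable.fd_write' ]
```

A WebAssembly Interface file lets this sort of check be expressed declaratively and run statically, instead of being hand-coded per runtime.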


GitHub parts ways with jQuery, adopts vanilla JS for its frontend

Bhagyashree R
07 Sep 2018
3 min read
GitHub has finally finished removing the jQuery dependency from its frontend code, the result of a gradual decoupling that began at least two years ago. The company chose not to replace jQuery with yet another framework. Instead, it made the transition with the help of polyfills that allowed it to use standard browser features such as EventListener, fetch, Array.from, and more.

Why GitHub chose jQuery in the beginning

- Simple: GitHub started using jQuery 1.2.1 as a dependency in 2007. It enabled its web developers to create a more modern and dynamic user experience. jQuery 1.2.1 simplified DOM manipulation, animations, and AJAX requests, and its simple interface gave GitHub developers a base to craft extension libraries such as pjax and facebox, which later became the building blocks for the rest of GitHub's frontend.
- Consistent: Unlike the XMLHttpRequest interface, jQuery was consistent across browsers. In its early days, GitHub chose jQuery because it allowed a small development team to quickly prototype and release new features without having to adjust code specifically for each web browser.

Why they decided to remove the jQuery dependency

Comparing jQuery against the rapid evolution of supported web standards in modern browsers, the team observed that:
- CSS class name switching can be achieved using Element.classList.
- Visual animations can be created using CSS stylesheets without writing any JavaScript code.
- The addEventListener method, used to attach an event handler to the document, is now stable enough for cross-browser use.
- $.ajax requests can be performed using the Fetch Standard.
- With the evolution of JavaScript, some of the syntactic sugar jQuery provides has become redundant.
- jQuery's chaining syntax didn't fit how GitHub wanted to write code going forward.

According to the announcement, decoupling from jQuery allows them to:
- Rely more on web standards
- Use MDN web docs as their default documentation
- Maintain more resilient code in the future
- Speed up page load times and JavaScript execution

Which technology is it using now?

GitHub has moved from jQuery to vanilla JS (plain JavaScript). It now uses querySelectorAll, fetch for AJAX, delegated-events for event handling, polyfills for standard DOM manipulations, and Custom Elements. Adoption of Custom Elements is on the rise: they are a component model native to the browser, which means users do not have to download, parse, and compile the additional bytes of a framework. With the release of Web Components v1 in 2017, GitHub started to adopt Custom Elements on a wider scale. In the future it also plans to use Shadow DOM.

To read more about how GitHub made this transition to standard browser features, check out the official announcement.

Related:
- Github introduces Project Paper Cuts for developers to fix small workflow problems, iterate on UI/UX, and find other ways to make quick improvements
- Why Golang is the fastest growing language on GitHub
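The kinds of one-for-one replacements described above can be sketched like this. The class name, selector usage, and URL handling here are hypothetical examples of the pattern, not code from GitHub's codebase:

```javascript
// jQuery: $(el).toggleClass('selected', on)  →  Element.classList
function toggleSelected(el, on) {
  el.classList.toggle('selected', on);
  return el.classList.contains('selected');
}

// jQuery: $.ajax({ url, dataType: 'json' })  →  the Fetch Standard
async function getJSON(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}
```

Both replacements use only standard browser APIs, which is exactly the trade GitHub made: slightly more verbose calls in exchange for no framework dependency.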


React in the streets, D3 in the sheets from ui.dev's RSS Feed

Matthew Emerick
28 Sep 2020
6 min read
Got a real spicy one for you today. Airbnb releases visx, Elder.js is a new Svelte framework, and CGId buttholes.

Airbnb releases visx 1.0

Visualizing a future where we can stay in Airbnbs again. Tony Robbins taught us to visualize success…

Last week, Airbnb officially released visx 1.0, a collection of reusable, low-level visualization components that combine the powers of React and D3 (the JavaScript library, not, sadly, the Mighty Ducks). Airbnb has been using visx internally for over two years to “unify our visualization stack across the company.” By “visualization stack”, they definitely mean “all of our random charts and graphs”, but that shouldn’t stop you from listing yourself as a full-stack visualization engineer once you’ve played around with visx a few times.

Why tho? The main sales pitch for visx is that it’s low-level enough to be powerful but high-level enough to, well, be useful. It does this by leveraging React for the UI layer and D3 for the under-the-hood mathy stuff. Said differently: React in the streets, D3 in the sheets. The library itself features 30 separate packages of React visualization primitives and offers 3 main advantages compared to other visualization libraries:
- Smaller bundles: because visx is split into multiple packages.
- BYOL: Like your boyfriend around dinner time, visx is intentionally unopinionated. Use any animation library, state management solution, or CSS-in-JS tool you want; visx DGAF.
- Not a charting library: visx wants to teach you how to fish, not catch a fish for you. It’s designed to be built on top of.

The bottom line: Are data visualizations the hottest thing to work on? Not really. But are they important? Also not really. But the marketing and biz dev people at your company love them. And those nerds esteemed colleagues will probably force you to help them create some visualizations in the future (if they haven’t already).
visx seems like a great way to help you pump those out faster, easier, and with greater design consistency. Plus, there’s always a chance that the product you’re working on could require some sort of visualizations (i.e. tooltips, gradients, patterns, etc.). visx can help with that too. So, thanks Airbnb. No word yet on whether Airbnb is planning to charge their usual 3% service fee on top of an overinflated cleaning fee for usage of visx, but we’ll keep you updated.

Elder.js 1.0: a new Svelte framework

Whoever Svelte it, dealt it. Respect your generators…

Elder.js 1.0 is a new, opinionated Svelte framework and static site generator that is very SEO-focused. It was created by Nick Reese to try to solve some of the unique challenges that come with building flagship SEO sites with 10-100k+ pages, like his site elderguide.com.

Quick Svelte review: Svelte is a JavaScript library way of life that was first released ~4 years ago. It compiles all your code to vanilla JS at build time with fewer dependencies and no virtual DOM, and its syntax is known for being pretty simple and readable.

So, what’s unique about Elder.js?
- Partial hydration: It hydrates just the parts of the client that need to be interactive, allowing you to significantly reduce your payloads while maintaining full control over component lazy-loading, preloading, and eager-loading.
- Hooks, shortcodes, and plugins: You can customize the framework with hooks that are designed to be “modular, sharable, and easily bundled in to Elder.js plugins for common use cases.”
- Straightforward data flow: Associating a data function in your route.js gives you complete control over how you fetch, prepare, and manipulate data before sending it to your Svelte template.

These features (plus its tiny bundle sizes) should make Elder.js a faster and simpler alternative to Sapper (the default Svelte framework) for a lot of use cases.
Sapper is still probably the way to go if you’re building a full-fledged Svelte app, but Elder.js seems pretty awesome for content-heavy Svelte sites.

The bottom line: We’re super interested in who will lead Elder.js’s $10 million seed round. That’s how this works, right?

JS Quiz (answer below): Why does this code work?

const friends = ['Alex', 'AB', 'Mikenzi']
friends.hasOwnProperty('push') // false

Specifically, why does friends.hasOwnProperty('push') work even though friends doesn’t have a hasOwnProperty property and neither does Array.prototype?

Cool bits
- Vime is an open-source media player that’s customizable, extensible, and framework-agnostic.
- The React Core Team wrote about the new JSX transform in React 17.
- Speaking of React 17, Happy 3rd Birthday React 16 😅.
- Nathan wrote an in-depth post about how to understand React rendering.
- Urlcat is a cool JavaScript library that helps you build URLs faster and avoid common mistakes. Is it cooler than the live-action CATS movie tho? Only one of those has CGId buttholes, so you tell us. (My search history is getting real weird writing this newsletter.)
- Everyone’s favorite Googler Addy Osmani wrote about visualizing data structures using the VSCode Debug Visualizer.
- Smolpxl is a JavaScript library for creating retro, pixelated games. There’s some strong Miniclip-in-2006 vibes in this one.
- Lea Verou wrote a great article about the failed promise of Web Components. “The failed promise” sounds like a mix between a T-Swift song and one of my middle school journal entries, but sometimes web development requires strong language, ok?
- Billboard.js released v2.1, because I guess this is now Chart Library Week 2020.

JS Quiz - Answer
As mentioned earlier, if you look at Array.prototype, it doesn’t have a hasOwnProperty method. How, then, does the friends array have access to hasOwnProperty? The reason is that the Array class extends the Object class. So when the JavaScript interpreter sees that friends doesn’t have a hasOwnProperty property, it checks whether Array.prototype does. When Array.prototype doesn’t either, it checks Object.prototype, finds the method there, and invokes it.

const friends = ['Alex', 'AB', 'Mikenzi']

console.log(Object.prototype)
/*
  constructor: ƒ Object()
  hasOwnProperty: ƒ hasOwnProperty()
  isPrototypeOf: ƒ isPrototypeOf()
  propertyIsEnumerable: ƒ propertyIsEnumerable()
  toLocaleString: ƒ toLocaleString()
  toString: ƒ toString()
  valueOf: ƒ valueOf()
*/

friends instanceof Array // true
friends instanceof Object // true
friends.hasOwnProperty('push') // false


Electron 7.0 releases in beta with Windows on Arm 64-bit, faster IPC methods, nativeTheme API, and more

Fatema Patrawala
24 Oct 2019
3 min read
Last week the team at Electron announced the release of Electron 7.0 in beta. It includes upgrades to Chromium 78, V8 7.8, and Node.js 12.8.1, and adds a Windows on Arm (64-bit) release, faster IPC methods, a new nativeTheme API, and much more. This release is published to npm under the beta tag and can be installed via npm install electron@beta, or npm i electron@7.0.0-beta.7. It is packed with upgrades, fixes, and new features.

Notable changes in Electron 7.0
- Stack upgrades: Electron 7.0 is built on Chromium 78, V8 7.8, and Node.js 12.8.1.
- Windows on Arm (64-bit) support has been added.
- ipcRenderer.invoke() and ipcMain.handle() have been added for asynchronous request/response-style IPC. These are strongly recommended over the remote module.
- The new nativeTheme API reads and responds to changes in the OS's theme and color scheme.
- The team switched to a new TypeScript definitions generator, which generates more precise definition files (.d.ts). Electron previously used Doc Linter and Doc Parser, but those had a few issues, so the team moved to TypeScript to make the definition files better without losing any information from the docs.

Other breaking changes
- Deprecated APIs have been removed: callback-based versions of functions that now use Promises; Tray.setHighlightMode() (macOS); app.enableMixedSandbox(); app.getApplicationMenu(); app.setApplicationMenu(); powerMonitor.querySystemIdleState(); powerMonitor.querySystemIdleTime(); webFrame.setIsolatedWorldContentSecurityPolicy(); webFrame.setIsolatedWorldHumanReadableName(); webFrame.setIsolatedWorldSecurityOrigin().
- Session.clearAuthCache() no longer allows filtering the cleared cache entries.
- Native interfaces on macOS (menus, dialogs, etc.) now automatically match the dark mode setting on the user's machine.
- The electron module has been updated to use @electron/get. Node 8 is the minimum supported Node version in this release.
- The electron.asar file no longer exists. Any packaging scripts that depend on its existence should be updated by the developers.

Additionally, the team announced that Electron 4.x.y has reached end-of-support as per the project's support policy. Developers and applications are encouraged to upgrade to a newer version of Electron.

To know more about this release, check out the Electron 7.0 GitHub page and the official blog post.

Related:
- Electron 6.0 releases with improved Promise support, native Touch ID authentication support, and more
- Electron 5.0 ships with new versions of Chromium, V8, and Node.js
- The Electron team publicly shares the release timeline for Electron 5.0
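The request/response shape of the new ipcRenderer.invoke()/ipcMain.handle() pair can be sketched with plain JavaScript stand-ins. The channel name and payload here are hypothetical, and in a real Electron app the two halves live in the renderer and main processes rather than one file:

```javascript
// Minimal sketch of promise-based request/response IPC, the pattern
// ipcMain.handle()/ipcRenderer.invoke() implement across processes.
const handlers = new Map();

function handle(channel, fn) {            // stands in for ipcMain.handle
  handlers.set(channel, fn);
}

async function invoke(channel, ...args) { // stands in for ipcRenderer.invoke
  const fn = handlers.get(channel);
  if (!fn) throw new Error(`No handler registered for "${channel}"`);
  return fn(...args);                     // result comes back as a resolved promise
}

// Main process side: register a handler for a (hypothetical) channel.
handle('get-theme', () => ({ dark: true }));

// Renderer side, in real Electron:
// const theme = await ipcRenderer.invoke('get-theme');
```

The win over the older send/reply style is that the caller simply awaits a promise instead of wiring up a separate reply listener.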

What is a full-stack developer?

Richard Gall
28 Mar 2018
3 min read
Full stack developer has been named one of the most common developer roles according to the latest Stack Overflow survey. But what exactly does a full stack developer do, and what does a typical full stack developer job description look like?

Full stack developers bridge the gap between the front end and back end

Full stack developers deal with the full spectrum of development, from the back end to the front end. They are hugely versatile technical professionals, and because they work on both the client and server side, they need to be able to learn new frameworks, libraries, and tools very quickly. There's a common misconception that full stack developers are experts in every area of web development. They're not; they're often generalists with broad knowledge that doesn't necessarily run deep. However, this lack of depth isn't necessarily a disadvantage. Because they have experience in both back end and front end development, they know how to provide solutions for both. Most importantly, as Agile becomes integral to modern development practices, developers who can properly understand and move between front and back ends are vital. From an economic perspective it also makes sense: with a team of full stack developers, you have a team of people able to perform multiple roles.

What a full stack developer job description looks like

Every full stack developer job description looks different. The role is continually evolving, and different organizations will require different skills. Here are some of the things you're likely to see:
- HTML/CSS
- JavaScript
- JavaScript frameworks like Angular or React
- Experience of UI and API design
- SQL and experience with other databases
- At least one backend programming language (Python, Ruby, Java, etc.)
- Backend framework experience (for example, ASP.NET Core, Flask)
- Build and release management or automation tools such as Jenkins
- Virtualization and containerization knowledge (and today possibly serverless too)

Essentially, it's up to the individual to build upon their knowledge by learning new technologies in order to become an expert full stack developer.

Full stack developers need soft skills

Soft skills are also important for full stack developers. Being able to communicate effectively and manage projects and stakeholders is essential. Of course, knowledge of Agile and Scrum is always in demand; being collaborative is also vital, as software development is never really a solitary exercise. Similarly, commercial awareness is highly valued: a full stack developer who understands they are solving business problems, not just software problems, is invaluable.


Ionic + Angular: Powering the App store and the web from Angular Blog - Medium

Matthew Emerick
31 Aug 2020
5 min read
Did you know Ionic and Angular power roughly 10% of the apps on iOS and almost 20% of apps on Android? Let’s repeat that: Angular powers a significant chunk of apps in the app stores. Why is it helpful to know this? Well, if you were on the fence about what technology choice you should make for your next app, it should be reassuring to know that apps powered by web technology are thriving in the app store. Let’s explore how we came to that conclusion and why it matters.

First, for a number of reasons, users visit these stores and download apps that help them in their day-to-day lives. Users are searching for ToDo apps (who doesn’t love a good ToDo app), banking apps, work-specific apps, and so much more. A good portion of these apps are built using web technologies such as Ionic and Angular. But enough talk; let’s look at some numbers to back this up.

The Data

If you’ve never heard of Appfigures, it’s an analytics tool that monitors and offers insights on more than 150,000 apps. Appfigures provides some great insight into what kind of tools developers are using to build their apps. Like what’s the latest messaging, mapping, or development SDK? That last one is the most important metric we want to explore. Let’s look at what the top development SDKs are for app development:

Data from https://appfigures.com/top-sdks/development/apps

Woah, roughly 10% of the apps on iOS and almost 20% of apps on Android use Ionic and Angular. This is huge. The data here is gathered by analyzing the various SDKs used in apps on the app stores. In these charts we see some options that are to be expected, like Swift and Kotlin. But Ionic and Angular are still highly present. We could even include Cordova, since many Ionic apps are Cordova-based, and these stats would increase even more. But we’ll keep to the data that we know for sure. Given the number of apps out there, even 10% and 20% are a significant size.
If you ignore Appfigures, you can get a sense of how many Ionic/Angular apps are out there by just searching for “com.ionicframework”, which is our starting package ID (also, people should really change this). Here’s a link if you’re interested.

Why Angular for mobile?

Developers are using Ionic and Angular to power a good chunk of the app stores. With everything Angular has to offer in terms of developer experience, tooling for fast apps, and its ecosystem of third-party libraries (like Ionic), it’s no wonder developers choose it as their framework of choice. From solo devs, to small shops, to large organizations like the UK’s National Health Service, Boehringer Ingelheim, and BlueCross Blue Shield, these organizations have selected Angular and Ionic for their tech stack, and you should feel confident to do so as well.

Web vs. App Stores

If Ionic and Angular are based on web technologies, why even target the app stores at all? With Progressive Web Apps gaining traction and the web becoming a more capable platform, why not just deploy to the web and skip the app stores? Well, it turns out that the app stores provide a lot of value that products need. Features like Push Notifications, the File System API, etc. are starting to come to the web, but they are still not fully available in every browser. Building a hybrid app with Ionic and Angular allows developers to use these features and gracefully fall back when these APIs are not available.

Discoverability is also another big factor here. While we can search for anything on the web, having users discover your app can be challenging. The app stores regularly promote new apps and highly rated apps as well. This can make the difference between a successful product and one that fails.

The Best of Both Worlds

The web is still an important step to shipping a great app. But when developers want to target different platforms, having the right libraries in place can make all the difference.
For instance, if you want to build a fast and performant web app, Angular is an excellent choice and is statistically the choice many developers make. On the other hand, if you want to bring that app experience to a mobile device, with full access to every native feature and offline capabilities, then a hybrid mobile app using Angular plus a mobile SDK like Ionic is the way to go. Either way, your investment in Angular will serve you well. And you’ll be in good company, with millions of devs and nearly a million apps right alongside you. Ionic + Angular: Powering the App store and the web was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


You can now use WebAssembly from .NET with Wasmtime!

Vincy Davis
05 Dec 2019
3 min read
Two months ago, ASP.NET Core 3.0 was released with an updated version of the Blazor framework, which allows building interactive client-side web UI with .NET. Yesterday, Peter Huene, a staff research engineer at Mozilla, shared his experience of using Wasmtime with .NET. He affirms that this will enable developers to programmatically load and execute WebAssembly code directly from their .NET programs.

Key benefits of using WebAssembly from .NET with Wasmtime

Share more code across platforms

Although .NET Core enables cross-platform use, developers find it difficult to use a native library, as .NET Core requires native interop and a platform-specific build for each supported platform. However, if the native library is compiled to WebAssembly, the same WebAssembly module can be used across many different platforms and programming environments, including .NET. This simplified distribution of the library and applications lets developers share more code across platforms.

Securely isolate untrusted code

According to Huene, "The .NET Framework attempted to sandbox untrusted code with technologies such as Code Access Security and Application Domains, but ultimately these failed to properly isolate untrusted code." This resulted in Microsoft deprecating their use for sandboxing and removing them from .NET Core. Huene asserts that since WebAssembly is designed for the web, a module can only call functions explicitly imported from the host environment and can only access the region of memory given to it by the host. Users can leverage this design to sandbox code in a .NET program as well.

Improved interoperability with interface types

In August this year, WebAssembly's interface types made it possible to use WebAssembly with many programming languages like Python, Ruby, and Rust. This interoperability reduces the amount of glue code necessary for passing complex types between the hosting application and a WebAssembly module. According to Huene, if Wasmtime adds official support for interface types to the .NET API in the future, it will enable a seamless exchange of complex types between WebAssembly and .NET.

Users have liked the approach of using WebAssembly from .NET with Wasmtime.

https://twitter.com/mattferderer/status/1202276545840197633

https://twitter.com/seangwright/status/1202488332011347968

To know how Peter Huene used WebAssembly from .NET, check out his demonstrations on the Mozilla Hacks blog.

Exploring .Net Core 3.0 components with Mark J. Price, a Microsoft specialist
.NET Framework API Porting Project concludes with .NET Core 3.0
Wasmer's first Postgres extension to run WebAssembly is here!
Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module
Introducing SwiftWasm, a tool for compiling Swift to WebAssembly
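The portability argument is easy to demonstrate from any host with a WebAssembly runtime. As an illustration, in JavaScript rather than .NET and independent of Wasmtime's actual API, here is a tiny hand-assembled module exporting an `add` function being loaded and called by a host program:

```javascript
// A minimal WebAssembly module, hand-assembled: it exports one function,
// add(a, b), returning the i32 sum of its two i32 arguments. The same bytes
// could just as well be loaded from .NET with Wasmtime, or from Python or
// Rust hosts; that is the cross-platform point made above.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                    // local.get 0/1, i32.add
]);

const module = new WebAssembly.Module(bytes);
const instance = new WebAssembly.Instance(module);
console.log(instance.exports.add(2, 3)); // 5
```

Any environment that understands the WebAssembly binary format produces the same result from these bytes, which is what makes a wasm build of a native library simpler to distribute than per-platform native binaries.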
Node.js v10.12.0 (Current) released

Sugandha Lahoti
11 Oct 2018
4 min read
Node.js v10.12.0 was released yesterday, with notable changes to assert, cli, crypto, fs, and more. However, the Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others. Hence, throughout the v10.12.0 documentation are indications of a section's stability. Let's look at the notable changes which are stable.

Assert module

The assert module provides a simple set of assertion tests that can be used to test invariants. It comprises a strict mode and a legacy mode, although it is recommended to only use strict mode. In Node.js v10.12.0, the diff output is improved by sorting object properties when inspecting the values that are compared with each other.

Changes to cli

The command-line interface in Node.js v10.12.0 has two improvements:

The options parser now normalizes _ to - in all multi-word command-line flags, e.g. --no_warnings has the same effect as --no-warnings.
It also includes bash completion for the node binary. Users can generate a bash completion script by running node --completion-bash. The output can be saved to a file which can be sourced to enable completion.

Crypto Module

The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. Node.js v10.12.0 adds support for PEM-level encryption, along with an API for asymmetric key pair generation. The new methods crypto.generateKeyPair and crypto.generateKeyPairSync can be used to generate public and private key pairs. The API supports RSA, DSA, and EC and a variety of key encodings (both PEM and DER).

Improvements to file system

The fs module provides an API for interacting with the file system in a manner closely modeled around standard POSIX functions. Node.js v10.12.0 adds a recursive option to fs.mkdir and fs.mkdirSync. On setting this option to true, non-existing parent folders will be automatically created.
Updates to Http/2

The http2 module provides an implementation of the HTTP/2 protocol. The new Node.js version adds support for a 'ping' event on Http2Session that is emitted whenever a non-ack PING is received. Support is also added for the ORIGIN frame. Also, nghttp2 is updated to v1.34.0, which adds RFC 8441 extended CONNECT protocol support to allow the use of WebSockets over HTTP/2.

Changes in module

In the Node.js module system, each file is treated as a separate module. Module has also been updated in v10.12.0: it adds module.createRequireFromPath(filename). This new method can be used to create a custom require function that will resolve modules relative to the filename path.

Improvements to process

The process object is a global that provides information about, and control over, the current Node.js process. Process adds a 'multipleResolves' event that is emitted whenever a Promise is attempted to be resolved multiple times.

Updates to url

Node.js v10.12.0 adds url.fileURLToPath(url) and url.pathToFileURL(path). These methods can be used to correctly convert between file: URLs and absolute paths.

Changes in Utilities

The util module is primarily designed to support the needs of Node.js' own internal APIs. The changes in Node.js v10.12.0 include:

A new sorted option is added to util.inspect(). If set to true, all properties of an object and Set and Map entries will be sorted in the returned string. If set to a function, it is used as a compare function.
The util.inspect.custom symbol is now defined in the global symbol registry as Symbol.for('nodejs.util.inspect.custom').
Support for BigInt numbers in util.format() is also added.

Improvements in V8 API

The V8 module exposes APIs that are specific to the version of V8 built into the Node.js binary. A number of V8 C++ APIs in v10.12.0 have been marked as deprecated, since they have been removed in the upstream repository. Replacement APIs are added where necessary.
Changes in Windows

The Windows msi installer now provides an option to automatically install the tools required to build native modules.

You can find the list of full changes on the Node.js Blog.

Node.js and JS Foundation announce intent to merge; developers have mixed feelings.
Node.js announces security updates for all their active release lines for August 2018.
Deploying Node.js apps on Google App Engine is now easy.
Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit

Bhagyashree R
30 Sep 2019
5 min read
Major web companies are adopting HTTP/3, the latest iteration of the HTTP protocol, in their experimental as well as production systems. Last week, Cloudflare announced that its edge network now supports HTTP/3. Earlier this month, Google's Chrome Canary added support for HTTP/3, and Mozilla Firefox will soon be shipping support in a nightly release this fall. The curl command-line client also has support for HTTP/3.

In an announcement, Cloudflare shared that customers can turn on HTTP/3 support for their domains by enabling an option in their dashboards. "We've been steadily inviting customers on our HTTP/3 waiting list to turn on the feature (so keep an eye out for an email from us), and in the coming weeks we'll make the feature available to everyone," the company added.

Last year, Cloudflare announced preliminary support for QUIC and HTTP/3, and customers could join a waiting list to try them as soon as they became available. Those customers who are on the waiting list and have received an email from Cloudflare can enable the support by flipping the switch on the "Network" tab of the Cloudflare dashboard. Cloudflare further added, "We expect to make the HTTP/3 feature available to all customers in the near future."

Cloudflare's HTTP/3 and QUIC support is backed by quiche, an implementation of the QUIC transport protocol and HTTP/3 written in Rust. It provides a low-level API for processing QUIC packets and handling connection state.

Why HTTP/3 is introduced

HTTP/1.0 required the creation of a new TCP connection for each request/response exchange between the client and the server, which resulted in latency and scalability issues. To resolve these issues, HTTP/1.1 was introduced. It included critical performance improvements such as keep-alive connections, chunked encoding transfers, byte-range requests, additional caching mechanisms, and more. Keep-alive, or persistent, connections allowed clients to reuse TCP connections.
A keep-alive connection eliminated the need to constantly perform the initial connection establishment step and reduced the slow start across multiple requests. However, there were still some limitations: multiple requests were able to share a single TCP connection, but they still needed to be serialized one after the other. This meant that the client and server could execute only a single request/response exchange at a time per connection.

HTTP/2 tried to solve this problem by introducing the concept of HTTP streams, which allowed the transmission of multiple requests/responses over the same connection at the same time. The drawback here is that in case of network congestion, all requests and responses are equally affected by packet loss, even if the data that is lost only concerns a single request.

HTTP/3 aims to address the problems in the previous versions of HTTP. It uses a new transport protocol called Quick UDP Internet Connections (QUIC) instead of TCP. The QUIC transport protocol comes with features like stream multiplexing and per-stream flow control. Here's a diagram depicting the communication between client and server using QUIC and HTTP/3:

Source: Cloudflare

HTTP/3 provides reliability at the stream level and congestion control across the entire connection. QUIC streams share the same QUIC connection, so no additional handshakes are required, and since QUIC streams are delivered independently, packet loss affecting one stream does not affect the others. QUIC also combines the typical three-way TCP handshake with the TLS 1.3 handshake. This provides users encryption and authentication by default and enables faster connection establishment. "In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS," Cloudflare explains.
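The head-of-line blocking difference between HTTP/2 over TCP and QUIC streams can be illustrated with a toy model (a hypothetical sketch, not any real protocol implementation): a lost packet stalls the whole connection when all streams share one ordered byte stream, but only its own stream when ordering is per-stream.

```javascript
// Toy model of head-of-line blocking. Packets arrive in send order; a lost
// packet must be retransmitted before anything after it can be delivered
// within the same ordered stream.
function deliverable(packets, lostSeqs, perStreamOrdering) {
  const delivered = [];
  if (!perStreamOrdering) {
    // HTTP/2 over TCP: one ordered byte stream for everything, so the first
    // gap stalls every request on the connection.
    for (const p of packets) {
      if (lostSeqs.has(p.seq)) break;
      delivered.push(p);
    }
  } else {
    // QUIC: each stream is ordered independently, so a gap only stalls the
    // stream it belongs to.
    const stalled = new Set();
    for (const p of packets) {
      if (lostSeqs.has(p.seq)) {
        stalled.add(p.stream);
      } else if (!stalled.has(p.stream)) {
        delivered.push(p);
      }
    }
  }
  return delivered.length;
}

const packets = [
  { seq: 0, stream: 'a' }, { seq: 1, stream: 'b' },
  { seq: 2, stream: 'a' }, { seq: 3, stream: 'b' },
];
const lost = new Set([0]); // the first packet of stream 'a' is dropped

console.log(deliverable(packets, lost, false)); // 0, every stream stalls
console.log(deliverable(packets, lost, true));  // 2, stream 'b' is unaffected
```

In the shared-queue case a single lost packet holds back data for every request; with per-stream delivery, only the affected stream waits for the retransmission.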
On Hacker News, a few users discussed the differences between HTTP/1, HTTP/2, and HTTP/3. Comparing the three, a user commented:

"Not aware of benchmarks, but specification-wise I consider HTTP2 to be a regression...I'd rate them as follows: HTTP3 > HTTP1.1 > HTTP2. QUIC is an amazing protocol...However, the decision to make HTTP2 traffic go all through a single TCP socket is horrible and makes the protocol very brittle under even the slightest network delay or packet loss...Sure it CAN work better than HTTP1.1 under ideal network conditions, but any network degradation is severely amplified, to a point where even traffic within a datacenter can amplify network disruption and cause an outage. HTTP3, however, is a refinement on those ideas and gets pretty much everything right afaik."

Some expressed that the creators of HTTP/3 should also focus on the "real" issues of HTTP, including proper session support and getting rid of cookies. Others appreciated this step, saying, "It's kind of amazing seeing positive things from monopolies and evergreen updates. These institutions can roll out things fast. It's possible in hardware too-- remember Bell Labs in its hay days?"

These were some of the advantages HTTP/3 and QUIC provide over HTTP/2. Read the official announcement by Cloudflare to know more in detail.

Cloudflare plans to go public; files S-1 with the SEC
Cloudflare finally launches Warp and Warp Plus after a delay of more than five months
Cloudflare RCA: Major outage was a lot more than "a regular expression went bad"
Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

Fatema Patrawala
26 Aug 2019
5 min read
Last week the Apache Flink community announced the release of Apache Flink 1.9.0. The Flink community defines the project goal as "to develop a stream processing system to unify and power many forms of real-time and offline data processing applications as well as event-driven applications." With this release, they have made a huge step forward in that effort by integrating Flink's stream and batch processing capabilities under a single, unified runtime.

There are significant features in this release, namely batch-style recovery for batch jobs and a preview of the new Blink-based query engine for Table API and SQL queries. The team also announced the availability of the State Processor API, one of the most frequently requested features, which enables users to read and write savepoints with Flink DataSet jobs. Additionally, Flink 1.9 includes a reworked WebUI, a preview of Flink's new Python Table API, and integration with the Apache Hive ecosystem.

Let us take a look at the major new features and improvements.

New Features and Improvements in Apache Flink 1.9.0

Fine-grained Batch Recovery

The time to recover a batch (DataSet, Table API, and SQL) job from a task failure is significantly reduced. Until Flink 1.9, task failures in batch jobs were recovered by canceling all tasks and restarting the whole job, i.e., the job was started from scratch and all progress was voided. With this release, Flink can be configured to limit the recovery to only those tasks that are in the same failover region. A failover region is the set of tasks that are connected via pipelined data exchanges; hence, the batch-shuffle connections of a job define the boundaries of its failover regions.

State Processor API

Up to Flink 1.9, accessing the state of a job from the outside was limited to the experimental Queryable State. This release introduces a new, powerful library to read, write, and modify state snapshots using the batch DataSet API.
In practice, this means:

Flink job state can be bootstrapped by reading data from external systems, such as external databases, and converting it into a savepoint.
State in savepoints can be queried using any of Flink's batch APIs (DataSet, Table, SQL), for example to analyze relevant state patterns or check for discrepancies in state that can support application auditing or troubleshooting.
The schema of state in savepoints can be migrated offline, compared to the previous approach requiring online migration on schema access.
Invalid data in savepoints can be identified and corrected.

The new State Processor API covers all variations of snapshots: savepoints, full checkpoints, and incremental checkpoints.

Stop-with-Savepoint

Cancelling with a savepoint is a common operation for stopping/restarting, forking, or updating Flink jobs. However, the existing implementation did not guarantee output persistence to external storage systems for exactly-once sinks. To improve the end-to-end semantics when stopping a job, Flink 1.9 introduces a new SUSPEND mode to stop a job with a savepoint that is consistent with the emitted data. You can suspend a job with Flink's CLI client as follows:

bin/flink stop -p [:targetDirectory] :jobId

The final job state is set to FINISHED on success, allowing users to detect failures of the requested operation.

Flink WebUI Rework

After a discussion about modernizing the internals of Flink's WebUI, this component was reconstructed using the latest stable version of Angular, basically a bump from Angular 1.x to 7.x. The redesigned version is the default in Apache Flink 1.9.0; however, there is a link to switch to the old WebUI.

Preview of the new Blink SQL Query Processor

After the donation of Blink to Apache Flink, the community worked on integrating Blink's query optimizer and runtime for the Table API and SQL. The team refactored the monolithic flink-table module into smaller modules.
This resulted in a clear separation of well-defined interfaces between the Java and Scala API modules and the optimizer and runtime modules.

Other important changes in this release:

The Table API and SQL are now part of the default configuration of the Flink distribution. Previously, the Table API and SQL had to be enabled by moving the corresponding JAR file from ./opt to ./lib.
The machine learning library (flink-ml) has been removed in preparation for FLIP-39.
The old DataSet and DataStream Python APIs have been removed in favor of FLIP-38.
Flink can be compiled and run on Java 9. Note that certain components interacting with external systems (connectors, filesystems, reporters) may not work, since the respective projects may have skipped Java 9 support.

The binary distribution and source artifacts for this release are available via the Downloads page of the Flink project, along with the updated documentation. Flink 1.9 is API-compatible with previous 1.x releases for APIs annotated with the @Public annotation. You can review the release notes for the detailed list of changes and new features before upgrading your setup to Flink 1.9.0.

Apache Flink 1.8.0 releases with finalized state schema evolution support
Apache Flink founders data Artisans could transform stream processing with patent-pending tool
Apache Flink version 1.6.0 released!
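The fine-grained recovery idea described earlier amounts to a connected-components computation over pipelined edges. A hypothetical illustration in JavaScript (not Flink's actual scheduler code): tasks reachable from the failed task via pipelined exchanges are restarted, while tasks across blocking shuffle boundaries are left alone.

```javascript
// Toy model of fine-grained batch recovery. Tasks connected by pipelined data
// exchanges form one failover region; blocking shuffle edges are region
// boundaries. On a task failure, only the failed task's region is restarted.
function failoverRegion(failedTask, pipelinedEdges) {
  // Build an undirected adjacency list over the pipelined edges only.
  const adj = new Map();
  for (const [a, b] of pipelinedEdges) {
    if (!adj.has(a)) adj.set(a, []);
    if (!adj.has(b)) adj.set(b, []);
    adj.get(a).push(b);
    adj.get(b).push(a);
  }
  // Collect every task reachable from the failed one (its connected component).
  const region = new Set([failedTask]);
  const stack = [failedTask];
  while (stack.length > 0) {
    for (const next of adj.get(stack.pop()) || []) {
      if (!region.has(next)) {
        region.add(next);
        stack.push(next);
      }
    }
  }
  return [...region].sort();
}

// source1 -> map1 and source2 -> map2 are pipelined; the map -> reduce
// exchange is a blocking shuffle, so 'reduce' sits in a separate region.
const pipelined = [['source1', 'map1'], ['source2', 'map2']];
console.log(failoverRegion('map1', pipelined)); // [ 'map1', 'source1' ]
```

Under this model a failure in map1 restarts only source1 and map1, while map2, source2, and everything past the shuffle keep their progress, which is the whole point of the feature.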
Brave 1.0 releases with focus on user privacy, cryptocurrency-centric private ads and payment platform

Fatema Patrawala
14 Nov 2019
5 min read
Yesterday, Brave, the company co-founded by ex-Mozilla CEO Brendan Eich, launched version 1.0 of its browser for Windows, macOS, Linux, Android, and iOS. In a browser market where users have to compromise on their privacy, Brave is positioning itself as a fast option that preserves users' privacy with strong default settings, as well as a cryptocurrency-centric private ads and payment platform that allows users to reward content creators.

"Surveillance capitalism has plagued the Web for far too long and we've reached a critical inflection point where privacy-by-default is no longer a nice-to-have, but a must-have. Users, advertisers, and publishers have finally had enough, and Brave is the answer. Brave 1.0 is the browser reimagined, transforming the Web to put users first with a private, browser-based ads and payment platform. With Brave, the Web can be a rewarding experience for all, without users paying with their privacy," said Brendan Eich, co-founder and CEO of Brave Software.

"Either we all accept the $330 billion ad-tech industry treating us as their products, exploiting our data, piling on more data breaches and privacy scandals, and starving publishers of revenue; or we reject the surveillance economy and replace it with something better that works for everyone. That's the inspiration behind Brave," he added.

The company also announced last month that Brave has about 8 million monthly active users.

Brave offers a privacy-first approach that natively blocks trackers, invasive ads, and device fingerprinting, which leads to substantial improvements in speed, privacy, security, performance, and battery life. It has default settings to block phishing, malware, and malvertising. Embedded plugins, which have proven to be an ongoing security risk, are disabled by default. Browsing data always stays private and on the user's device, which means Brave will never see or store the data on its servers or sell user data to third parties.
Brave 1.0 key features

Additionally, Brave 1.0 offers some unique features to its users:

Brave Rewards program to fund the open web – By activating Brave Rewards, users can support their favorite publishers and content creators through the integrated Brave wallet on both desktop and mobile. This feature allows users to send Basic Attention Tokens (BAT) as tips for great content, either directly as they browse or by defaulting to recurring monthly payments to continuously support websites they visit frequently. There are over 300,000 verified websites on-boarded on Brave for this program, including The Washington Post, The Guardian, Wikipedia, YouTube, Twitch, Twitter, GitHub, and more.

Brave Ads compensate users for their attention – Brave has a new blockchain-based advertising model that enables privacy and gives 70% of its revenue share in the form of Basic Attention Tokens (BAT) to users who view Brave ads. These ads are part of a private ad network and the Brave Rewards program, which allows users to opt in to view relevant privacy-preserving ads in exchange for earning BAT. When users opt into Brave Rewards, Brave ads are enabled by default. Ad matching happens directly on the user's device based on the content they view, so their data is never sent to anyone, and they see rewarding ads without mass surveillance. Users can also transfer their earned BAT from the wallet and convert it into digital assets and fiat currencies, but they need to complete a verification process with Uphold, a digital money platform.

Brave Shields for automatic ad and tracker blocking – Brave Shields is enabled by default and is customizable from the address bar. It blocks invasive third-party ads, trackers, and autoplay videos immediately, without needing to install any additional programs.

On Hacker News, users have appreciated the way the Brave browser operates and rewards its content consumers as well as its creators.
One of them explained its functioning in detail:

"I've been using Brave rewards, both as a user and a content maker. It's really great, and I feel this may be a reasonable alternative to the invasive trackers+ads we have today. For the uninitiated, Brave lets users opt-in to Brave rewards:

- You set your browser to reward content creators with Basic Attention Token (BAT). You set a budget (e.g. 10 BAT/month), and Brave distributes it to the sites you use most, e.g. if you watch a particular YouTube channel 30% of your browsing time, it will send 30% of 10 BAT each month to that content creator.

- As a user, you can get paid in BAT. You tell Brave if you're willing to see ads, and how often. If so, you get paid in BAT, which you can then distribute to content creators. Brave ads are different: rather than intrusive in-page ads, Brave ads show up as a notification in your operating system outside of the page. This prevents slow downs of the page, keeping your browsing focused, while still allowing support of content creators. And of course, Brave ads are optional and opt-in."

You can download Brave for free by visiting the official Brave page, Google Play Store, or the App Store.

Google is circumventing GDPR, reveals Brave's investigation for the Authorized Buyers ad business case
Brave ad-blocker gives 69x better performance with its new engine written in Rust
Edge, Chrome, Brave share updates on upcoming releases, recent milestones, and more at State of Browsers event
Brave launches its Brave Ads platform sharing 70% of the ad revenue with its users
Brave Privacy Browser has a 'backdoor' to remotely inject headers in HTTP requests: HackerNews
GNOME team adds Fractional Scaling support in the upcoming GNOME 3.32

Natasha Mathur
05 Mar 2019
2 min read
The GNOME team released beta version 3.32 of GNOME, a free and open source GUI for the Linux operating system, last month. GNOME 3.32 is set to release on 13th March 2019. Now, the GNOME team has also added the much-awaited support for fractional scaling to GNOME 3.32, reports Phoronix.

The GNOME 3.32 beta release brought major improvements, bug fixes, and other changes. Previously, GNOME allowed users to scale windows only by integral factors (typically 2). This was very limiting, as many displays have DPI values that fall between those suited to a scale factor of 2 and unscaled output. To improve this, GNOME now allows users to scale by fractional values, e.g. 3/2 or 2/1.3333. This gives users more control over UI scaling, as opposed to the previous integer-based scaling of 2, 3, etc.

The newly added support for fractional scaling in the upcoming GNOME 3.32 will enhance the user experience on modern HiDPI displays. The GNOME Shell changes, along with the Mutter changes, have been merged ahead of GNOME 3.32.0.

GNOME version 3.32 says goodbye to application menus
Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more
React 16.5.0 is now out with a new package for scheduling, support for DevTools, and more!

Bhagyashree R
07 Sep 2018
3 min read
React announced its monthly release yesterday: React 16.5.0. In this release they have improved warning messages, added support for the React DevTools Profiler in React DOM, and made some bug fixes.

Updates in React

A dev warning is shown if the React.forwardRef render function doesn't take exactly two arguments.
A more improved message is shown if someone passes an element to createElement by mistake.
The onRender function will be called after mutations, and commitTime reflects pre-mutation time.

Updates in React DOM

New additions

Support for the React DevTools Profiler is added.
The react-dom/profiling entry point is added for profiling in production.
The onAuxClick event is added for browsers that support it.
The movementX and movementY fields are added to mouse events.
The tangentialPressure and twist fields are added to pointer events.
Support for passing booleans to the focusable SVG attribute.

Improvements

Improved component stack for the folder/index.js naming convention.
Improved warning when using getDerivedStateFromProps without initialized state.
Improved invalid textarea usage warning.
Electron <webview> tags are now allowed without warnings.

Bug fixes

Fixed incorrect data in the compositionend event when typing Korean on IE11.
Avoid setting empty values on submit and reset buttons.
The onSelect event not being triggered after drag and drop.
The onClick event not working inside a portal on iOS.
A performance issue when thousands of roots are re-rendered.
gridArea will be treated as a unitless CSS property.
The checked attribute not getting initially set on the input.
A crash when using dynamic children in the option tag.

Updates in React DOM Server

A crash that happens during server render in React 16.4.1 is fixed.
A crash when setTimeout is missing is fixed.
This release also fixes a crash with nullish children when using dangerouslySetInnerHTML in a selected option.
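The new forwardRef warning boils down to an arity check on the render function. A toy re-creation of that idea in plain JavaScript (this is an illustration, not React's actual source):

```javascript
// Sketch of the dev-mode check behind the warning: a forwardRef render
// function must have the shape (props, ref) => element, i.e. declare
// exactly two parameters.
function checkForwardRefRender(render) {
  const warnings = [];
  if (typeof render !== 'function') {
    warnings.push('forwardRef requires a render function');
  } else if (render.length !== 2) {
    warnings.push(
      'forwardRef render functions accept exactly two parameters: props and ref'
    );
  }
  return warnings;
}

console.log(checkForwardRefRender((props, ref) => null).length); // 0, no warning
console.log(checkForwardRefRender((props) => null).length);      // 1, warns
```

Function.length reports the number of declared parameters, which is why forgetting the ref argument (or adding extras) is detectable at development time.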
Updates in React Test Renderer and Test Utils

The Jest-specific ReactTestUtils.mockComponent() helper is now deprecated.
A warning is shown when a React DOM portal is passed to ReactTestRenderer.
Improvements in TestUtils error messages for a bad first argument.

Updates in React ART

Support for DevTools is added.

New package for scheduling (experimental)

The ReactDOMFrameScheduling module will be pulled out into a separate package for cooperatively scheduling work in a browser environment. It is used by React internally, but its public API is not finalized yet.

To see the complete list of updates in React 16.5.0, head over to their GitHub repository.

React Next
React Native 0.57 coming soon with new iOS WebViews
Implementing React Component Lifecycle methods [Tutorial]
Understanding functional reactive programming in Scala [Tutorial]