
Tech News - Server-Side Web Development

85 Articles

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team

Bhagyashree R
02 Nov 2018
2 min read
Yesterday, the Google Chrome team introduced Carlo, a web rendering surface for Node applications. Carlo provides rich rendering capabilities powered by the Google Chrome browser to Node applications. It communicates with the locally installed browser instance using Puppeteer, another Google Chrome project that provides a high-level API to control Chrome or Chromium over the DevTools Protocol.

Why was Carlo introduced?

Carlo aims to show how the locally installed browser can be used with Node out of the box. The advantage of using Carlo over Electron is that the Node and Chrome V8 engines are decoupled in Carlo. This provides a maintainable model that allows independent updates of the underlying components. In short, Carlo gives you more control over bundling.

What can you do with Carlo?

Carlo enables you to create hybrid applications that use the web stack for rendering and Node for capabilities. With it you can:
- Visualize the dynamic state of your Node applications using the web rendering stack.
- Expose additional system capabilities accessible from Node to your web applications.
- Package your application into a single executable using the command-line interface, pkg.

How does it work?

Carlo works in three steps:
1. It checks whether Google Chrome is installed locally.
2. It launches Google Chrome and establishes a connection to it over the process pipe.
3. It exposes a high-level API for rendering in Chrome.

For users who do not have Chrome installed, Carlo prints an error message. It supports Chrome Stable channel versions 70.* and Node v7.6.0 onwards. You can install it and get started by executing the following command:

npm i carlo

Read the full description on Carlo's GitHub repository.
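The three-step startup flow described above can be sketched as plain Node-style logic. This is a toy illustration only, not Carlo's actual implementation; `findChrome` is a hypothetical locator function standing in for however Carlo discovers the local browser:

```javascript
// Toy sketch of Carlo's startup flow -- not Carlo's real code.
// `findChrome` is a hypothetical function that locates a local Chrome binary.
function startRenderingSurface(findChrome) {
  const chromePath = findChrome();          // step 1: look for a local Chrome
  if (!chromePath) {
    // Carlo prints an error message when no Chrome installation is found
    return { ok: false, error: 'Could not find Chrome installation' };
  }
  // steps 2 and 3: launch Chrome, connect over a process pipe,
  // and expose a high-level rendering API (elided in this sketch)
  return { ok: true, chromePath };
}
```

The real entry point is the carlo package installed via `npm i carlo`; see Carlo's GitHub repository for its actual API.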

Welcome Express Gateway 1.11.0, a microservices API Gateway on Express.js

Bhagyashree R
24 Aug 2018
2 min read
Express Gateway 1.11.0 has been released, adding an important feature to the proxy policy along with some bug fixes. Express Gateway is a simple, agnostic, organic, and portable microservices API Gateway built on Express.js.

What is new in this version?

Additions
- New stripPath parameter: Support for a new parameter called stripPath has been added to the proxy policy. Its default value is false. You can now fully control both the URL space of your backend server and the one exposed by Express Gateway.
- Official Helm chart: An official Helm chart has been added that lets you install Express Gateway on your Rancher or Kubernetes cluster with a single command.

Bug fixes
- The base condition schema is now correctly returned by the /schemas Admin API endpoint, so external clients can use it and resolve its references correctly.
- Previously, invalid configuration could be sent to the gateway through the Admin API when using Express Gateway in production. The gateway correctly validated the gateway.config content, but it wasn't validating all the policies inside it. Now, whenever an Admin API call modifies the configuration, validation is triggered so that a broken configuration file is never persisted to disk.
- Fixed a missing field in the oauth2-introspect JSON Schema.
- For consistency, the keyauth schema is now correctly named key-auth.

Miscellaneous changes
- The unused migration framework has been removed.
- The X-Powered-By header is now disabled for security reasons.
- The way Express Gateway is started in the official Dockerfile has changed: it is no longer wrapped in a bash command before being run. The previous command allocated an additional /bin/sh process; the new one does not.

In this article we looked at some of the updates introduced in Express Gateway 1.11.0. To learn more about this release, head over to their GitHub repo.
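To see what a stripPath-style option does, here is a sketch of the behavior in plain JavaScript. This is purely illustrative, not Express Gateway's actual implementation; the function name and shape are invented for the example:

```javascript
// Illustration of what a stripPath-style proxy option does.
// Not Express Gateway's code -- just the behavior the release notes describe.
function rewriteProxyPath(requestPath, mountPath, stripPath = false) {
  if (!stripPath) return requestPath;   // default (stripPath: false): forward unchanged
  if (requestPath.startsWith(mountPath)) {
    const rest = requestPath.slice(mountPath.length);
    return rest.startsWith('/') ? rest : '/' + rest;  // keep a leading slash for the backend
  }
  return requestPath;                   // paths outside the mount are untouched
}
```

With stripPath enabled, a gateway endpoint mounted at /api can forward /api/users to the backend as /users, which is what lets the two URL spaces stay independent.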

How Deliveroo migrated from Ruby to Rust without breaking production

Bhagyashree R
15 Feb 2019
3 min read
Yesterday, the Deliveroo engineering team shared how they migrated their Tier 1 Dispatcher service from Ruby to Rust without breaking production. Deliveroo is an online food delivery company based in the United Kingdom.

Why did Deliveroo part ways with Ruby for the Dispatcher service?

The Logistics team at Deliveroo uses a service called Dispatcher. This service optimally offers an order to a rider, and it does this with the help of a timeline for each rider. The timeline helps predict where riders will be at a certain point in time, which allows the service to efficiently suggest a rider for an order. Building these timelines requires a lot of computation: each individual computation is quick, but there are a great many of them.

The Dispatcher service was first written in Ruby, the company's preferred language in its early days. It performed fine at first because the business was not as big as it is now. As Deliveroo grew, the number of orders increased, and the Dispatcher service started taking much longer than before.

Why did they choose Rust as the replacement for Ruby?

Instead of rewriting the whole service in Rust, the team decided to identify the bottlenecks that were slowing down the Dispatcher and rewrite only those in a different programming language. They concluded that it would be easier to build a native extension written in Rust and make it work with the existing Ruby code. The team chose Rust because it offers performance comparable to C while being memory safe. Rust also allowed them to build dynamic libraries that can later be loaded into Ruby. Additionally, some team members already had experience with Rust, and one part of the Dispatcher was already written in it.

How did they migrate from Ruby to Rust?

There are two ways to call Rust from Ruby:
- Write a dynamic library in Rust with an extern "C" interface and call it using FFI.
- Write a dynamic library, but use the Ruby API to register methods so they can be called from Ruby directly, just like any other Ruby code.

The Deliveroo team chose the second approach, as many libraries are available to make it easier, for instance ruru, rutie, and Helix. They decided to use Rutie, a recent fork of Ruru that is under active development.

The team planned to gradually replace all parts of the Ruby Dispatcher with Rust. They began by replacing the classes that had no dependencies on other parts of the Dispatcher with Rust implementations, guarded by feature flags. As the APIs of the Ruby and Rust implementations were quite similar, the team could reuse the same tests. With Rust, the overall dispatch time dropped significantly: in one of their larger zones, it went from ~4 seconds to 0.8 seconds, of which the Rust part consumed only 0.2 seconds.

Read the post shared by Andrii Dmytrenko, a Software Engineer at Deliveroo, for more details.

Javalin 2.0 RC3 released with major updates!

Bhagyashree R
06 Aug 2018
3 min read
Javalin is a simple, lightweight, interoperable, and flexible web framework for Java and Kotlin. With major changes introduced in the codebase, the team has announced the release of Javalin 2.0 RC3. The updates include the removal of some abstraction layers, using Set instead of List, the removal of CookieBuilder, Javalin lambda WebSockets replacing Jetty WebSockets, and more.

Updates in Javalin 2.0 RC3

Package structure improvements

The following packages have been restructured in this release (Javalin 1.7 → Javalin 2.0 RC3):
- io.javalin.embeddedserver.jetty.websocket → io.javalin.websocket
- io.javalin.embeddedserver.Location → io.javalin.staticfiles.Location
- io.javalin.translator.json.JavalinJsonPlugin → io.javalin.json.JavalinJson
- io.javalin.translator.json.JavalinJacksonPlugin → io.javalin.json.JavalinJackson
- io.javalin.translator.template.JavalinXyzPlugin → io.javalin.rendering.JavalinXyz
- io.javalin.security.Role.roles → io.javalin.security.SecurityUtil.roles
- io.javalin.ApiBuilder → io.javalin.apibuilder.ApiBuilder
- io.javalin.ApiBuilder.EndpointGroup → io.javalin.apibuilder.EndpointGroup

Changes in the server defaults

Earlier, customizing the embedded server looked like this:

app.embeddedServer(new EmbeddedJettyFactory(() -> new Server())) // v1

With the embedded server abstraction removed, you can now write:

app.server(() -> new Server()) // v2

The static method Javalin.start(port) has been removed; use Javalin.create().start(0) instead. The defaultCharset() method has also been removed.

The following are now enabled by default:
- Dynamic gzip; turn it off with disableDynamicGzip()
- Request caching, now limited to 4kb
- The server now has a LowResourceMonitor attached
- URLs are now case-insensitive, meaning Javalin treats /path and /Path as the same URL; this can be disabled with app.enableCaseSensitiveUrls()

Javalin lambda replaces Jetty WebSockets

Since Jetty WebSockets have limited functionality, they have been replaced with Javalin lambda WebSockets.

AccessManager

AccessManager is an interface used to set per-endpoint authentication and authorization. It now uses Set instead of List, and it runs for every single request, though the default implementation does nothing.

Context

Context is the object that provides everything needed to handle an HTTP request. The following updates have been introduced:
- ctx.uri() has been removed; it was a duplicate of ctx.path()
- ctx.param() is replaced with ctx.pathParam()
- ctx.xyzOrDefault("key") methods are changed to ctx.xyz("key", "default")
- ctx.next() has been removed
- ctx.request() is now ctx.req
- ctx.response() is now ctx.res
- All ctx.renderXyz methods are now just ctx.render(), since the correct engine is chosen based on the file extension
- ctx.charset(charset) has been removed
- CookieBuilder has been removed; use the Cookie class instead
- List<T> is now returned instead of Array<T>
- Methods that used to return nullable collections now return empty collections instead
- Kotlin users can now do ctx.body<MyClass>() to deserialize JSON

In this article we looked at some of the major updates in Javalin 2.0. To know more, head over to their GitHub repository.

Introducing Cycle.js, a functional and reactive JavaScript framework

Bhagyashree R
19 Nov 2018
3 min read
Cycle.js is a functional and reactive JavaScript framework for writing predictable code. Apps built with Cycle.js consist of pure functions, which means they only take inputs and generate predictable outputs, without performing any I/O effects.

What is the basic concept behind Cycle.js?

Cycle.js considers your application a pure main() function. It takes inputs that are read effects (sources) from the external world and produces outputs that are write effects (sinks) to affect the external world. Drivers (plugins that handle DOM effects, HTTP effects, and so on) are responsible for managing these I/O effects in the external world.

(Image source: Cycle.js)

The main() function is built using reactive programming primitives that maximize separation of concerns and provide a fully declarative way of organizing your code. The dataflow in your app is clearly visible in the code, making it readable and traceable. Here are some of its properties:

Functional and reactive

As Cycle.js is functional and reactive, it allows developers to write predictable and well-separated code. Its building blocks are reactive streams from libraries like RxJS, xstream, or Most.js, which greatly simplify code related to events, asynchrony, and errors. This application structure also separates concerns, since all dynamic updates to a piece of data are co-located and impossible to change from outside.

Simple and concise

The framework is easy to learn and get started with, as it has very few concepts. Its core API has just one function, run(app, drivers). Apart from that, there are streams, functions, drivers, and a helper function to isolate scoped components. Most of its building blocks are just JavaScript functions. Functional reactive streams can build complex dataflows with very few operations, which makes Cycle.js apps very small and readable.

Extensible and testable

In Cycle.js, drivers are simple functions that take messages from sinks and call imperative functions. All I/O effects are done by the drivers, which means your application is just a pure function. This makes it very easy to swap drivers around; currently there are drivers for React Native, HTML5 Notification, Socket.io, and so on. Also, with Cycle.js, testing is just a matter of feeding inputs and inspecting the output.

Composable

As mentioned earlier, a Cycle.js app, no matter how complex, is a function that can be reused in a larger Cycle.js app. Sources and sinks act as the interface between the application and the drivers, but they are also the interface between a child component and its parent. Cycle.js components are not just GUI widgets as in other frameworks: you can make Web Audio components, network request components, and others, since the sources/sinks interface is not exclusive to the DOM.

You can read more about Cycle.js on its official website.
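The sources-to-sinks idea described above can be sketched without any framework code. In this toy illustration (not the actual Cycle.js API), plain values stand in for the reactive streams a real Cycle.js app would use, and main() is a pure function from sources to sinks:

```javascript
// Toy illustration of the Cycle.js sources -> sinks idea.
// Real Cycle.js apps use reactive streams (e.g. xstream); plain values
// stand in for them here. This is not the actual Cycle.js API.
function main(sources) {
  // read effect: take input from the outside world
  const name = sources.props.name;
  // write effect: describe what a DOM driver should render
  return { DOM: '<h1>Hello, ' + name + '!</h1>' };
}

// A "driver" would then perform the actual side effect:
function domDriver(sink) {
  return sink; // in a browser this would patch the real DOM
}
```

Because main() performs no I/O itself, testing it really is just feeding inputs and inspecting outputs, as the article notes.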

Laravel 5.7 released with support for email verification, improved console testing

Prasad Ramesh
06 Sep 2018
3 min read
Laravel 5.7.0 has been released. The latest version of the PHP framework includes support for email verification, guest policies, dump-server, improved console testing, notification localization, and other changes.

The versioning scheme in Laravel follows the convention paradigm.major.minor. Major releases are made every six months, in February and August, while minor releases may come out every week without breaking any functionality. For LTS releases like Laravel 5.5, bug fixes are provided for two years and security fixes for three years, the longest support window available. For general releases, bug fixes are provided for six months and security fixes for a year. After upgrading, when referencing the Laravel framework or its components from your application or package, always use a version constraint like 5.7.*, since major releases can include breaking changes.

Laravel Nova

Laravel Nova is a pleasant-looking administration dashboard for Laravel applications. Its primary feature is the ability to administer the underlying database records using Laravel Eloquent. Additionally, Nova supports filters, lenses, actions, queued actions, metrics, authorization, custom tools, custom cards, and custom fields.

Email Verification

Laravel 5.7 introduces optional email verification for the authentication scaffolding included with the framework. To accommodate this feature, an email_verified_at timestamp column has been added to the default users table migration that ships with the framework.

Guest User Policies

In previous Laravel versions, authorization gates and policies automatically returned false for unauthenticated visitors to your application. Now you can allow guests to pass through authorization checks by declaring an "optional" type-hint or supplying a null default value for the user argument:

Gate::define('update-post', function (?User $user, Post $post) {
    // ...
});

Symfony Dump Server

Laravel 5.7 offers integration with the dump-server command via a package by Marcel Pociot. To get started, run the dump-server Artisan command:

php artisan dump-server

Once the server starts, all calls to dump will be shown in the dump-server console window instead of your browser, allowing you to inspect values without mangling your HTTP response output.

Notification Localization

You can now send notifications in a locale other than the current language, and Laravel will even remember this locale if the notification is queued. Localization of many notifiable entries can also be achieved via the Notification facade.

Console Testing

Laravel 5.7 makes it easy to "mock" user input for console commands using the expectsQuestion method. Additionally, you can specify the expected exit code and the text you expect the console command to output using the assertExitCode and expectsOutput methods.

These were some of the major changes in Laravel 5.7; for a complete list, visit the Laravel Release Notes.

Salesforce open sources ‘Lightning Web Components framework’

Savia Lobo
30 May 2019
4 min read
Yesterday, developers at Salesforce open sourced the Lightning Web Components framework, a new JavaScript framework that leverages the web standards breakthroughs of the last five years. This allows developers to contribute to the roadmap, and to use the framework regardless of whether they are building applications on Salesforce or on any other platform. Lightning Web Components was first introduced in December 2018.

In their official blog post, the developers write, “The last five years have seen an unprecedented level of innovation in web standards, mostly driven by the W3C/WHATWG and the ECMAScript Technical Committee (TC39): ECMAScript 6, 7, 8, 9 and beyond, Web components, Custom elements, Templates and slots, Shadow DOM, etc.” These standards have dramatically transformed the web stack, and many features that once required frameworks are now standard. The framework, the developers say, was “born as a modern framework built on the modern web stack”.

The Lightning Web Components framework includes three key parts:
- The Lightning Web Components framework, the framework’s engine.
- The Base Lightning Components, a set of over 70 UI components, all built as custom elements.
- Salesforce Bindings, a set of specialized services that provide declarative and imperative access to Salesforce data and metadata, data caching, and data synchronization.

The Lightning Web Components framework has no dependencies on the Salesforce platform; the Salesforce-specific services are built on top of it. This layered architecture means you can use the framework to build web apps that run anywhere. The benefits include:
- You only need to learn a single framework.
- You can share code between apps.
- As Lightning Web Components is built on the latest web standards, you know you are using a cutting-edge framework based on the latest patterns and best practices.

Many users, however, said they are unhappy and find the Lightning experience comparatively slow. One user wrote on Hacker News, “the Lightning Experience always felt non-performant compared to the traditional server-rendered pages. Things always took a noticeable amount of time to finish loading. Even though the traditional interface is, by appearance alone, quite traditional, as least it felt fast. I don't know if Lightning's problems were with poor performing front end code, or poor API performance. But I was always underwhelmed when testing the SPA version of Salesforce.”

Another user wrote, “One of the bigger mistakes Salesforce made with Lightning is moving from purely transactional model to default-cached-no-way-to-purge model. Without letting a single developer to know that they did it, what are the pitfalls or how to disable it (you can't). WRT Lightning motivation, sounds like a much better option would've been supplement older server-rendered pages with some JS, update the stylesheets and make server language more useable. In fact server language is still there, still heavily used and still lacking expressiveness so badly that it's 10x slower to prototype on it rather than client side JS…”

In support of Salesforce, a Hacker News user explained why the framework might be slow: “At its core, Salesforce is a platform. As such, our customers expect their code to work for the long run (and backwards compatibility forever). Not owning the framework fundamentally means jeopardizing our business and our customers, since we can't control our future. We believe the best way to future-proof our platform is to align with standards and help push the web platform forward, hence our sugar and take on top of Web Components.”

He further added, “about using different frameworks, again as a platform, allowing our customers to trivially include their framework choice of the day, will mean that we might end up having to load seven versions of react, five of Vue, 2 Embers .... You get the idea :) Outside the platform we love all the other frameworks (hence other properties might choose what it fits their use cases) and we had a lot of good discussions with framework owners about how to keep improving things over the last two years. Our goal is to keep contributing to the standards and push all the things to be implemented natively on the platform so we all get faster and better.”

To know more, visit the Lightning Web Components framework’s official website.

HAProxy shares how you can use stick tables for server persistence, threat detection, and collecting metrics

Bhagyashree R
24 Sep 2018
3 min read
Yesterday, HAProxy published an article discussing stick tables, an in-memory storage mechanism. Introduced in 2010, stick tables allow you to track client activities across requests, enable server persistence, and collect real-time metrics. They are supported in both the HAProxy Community and Enterprise Editions.

You can think of stick tables as a type of key-value store. The key represents what you track across requests, such as a client IP, and the values are counters that, for the most part, HAProxy calculates for you.

What are the common use cases of stick tables?

StackExchange realized that beyond their core functionality, server persistence, stick tables could be used in many other scenarios. They sponsored their development, and stick tables have since become an incredibly powerful subsystem within HAProxy. Their main uses include:

Server persistence

Stick tables were originally introduced to solve the problem of server persistence. HTTP requests are stateless by design: each request is executed independently, without any knowledge of the requests executed before it. Stick tables can store a piece of information, such as an IP address, cookie, or range of bytes in the request body, and associate it with a server. The next time HAProxy sees new connections using the same piece of information, it forwards the request to the same server. This helps track user activities from one request to the next and adds a mechanism for storing events and categorizing them by client IP or other keys.

Bot detection

Stick tables can be used to defend against certain types of bot threats, such as request floods, login brute-force attacks, vulnerability scanners, web scrapers, and slowloris attacks.

Collecting metrics

With stick tables, you can collect metrics to understand what is going on in HAProxy without enabling logging and having to parse the logs. In this scenario the Runtime API is used, which can read and analyze stick table data from the command line, a custom script, or an executable program. You can visualize this data using any dashboard of your choice, or use the fully loaded dashboard that comes with HAProxy Enterprise Edition.

These were a few of the use cases for stick tables. To get a clearer understanding of stick tables and how they are used, check out the post by HAProxy.

Update: This article earlier said, "Yesterday (September 2018), HAProxy announced that they are introducing stick tables." As a reader pointed out, this was incorrect: stick tables have been around since 2010. The article has been updated to reflect this.
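As a rough illustration of the key-value-with-counters idea behind stick tables, here is a toy sketch in JavaScript. This is not how HAProxy implements stick tables (they live inside the proxy and are configured declaratively); the class and its limit parameter are invented for the example:

```javascript
// Toy in-memory "stick table": per-client request counters, in the spirit
// of the request-rate tracking described above. Not HAProxy's implementation.
class StickTable {
  constructor(requestLimit) {
    this.requestLimit = requestLimit;
    this.counters = new Map();          // key (e.g. client IP) -> counter
  }
  // Record one request for the key and report whether it stays under the limit.
  track(clientIp) {
    const count = (this.counters.get(clientIp) || 0) + 1;
    this.counters.set(clientIp, count);
    return count <= this.requestLimit;  // false -> candidate for rate limiting
  }
}
```

A bot-detection rule of the sort the article mentions amounts to exactly this: keep a counter per client key and act when it crosses a threshold.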

Introducing Mint, a new HTTP client for Elixir

Amrata Joshi
26 Feb 2019
2 min read
Yesterday, the Elixir team introduced Mint, their new low-level HTTP client that provides a small and functional core. It is connection-based: each connection is a single structure with an associated socket belonging to the process that started the connection.

Features of Mint

Connections

Mint's HTTP connections are managed directly in the process that starts the connection. There is no connection pool used when a connection is opened, which helps users build their own process structure to fit their application. Each connection has a single immutable data structure that the user manages. Mint uses "active mode" sockets, so data and events from the socket are sent as messages to the process that started the connection. The user then passes these messages to the stream/2 function, which returns the updated connection and a list of "responses". These responses are streamed back in partial response chunks.

Process-less

To many users, Mint may seem more cumbersome to use than other HTTP libraries. But by providing a low-level API without a predetermined process architecture, Mint gives flexibility to the user of the library. If a user writes GenStage pipelines, a pool of producers can fetch data from external sources via HTTP. With Mint, it is possible to have each GenStage producer manage its own connection, reducing overhead and simplifying the code.

HTTP/1 and HTTP/2

The Mint.HTTP module provides a single interface for both HTTP/1 and HTTP/2 connections and performs version negotiation on HTTPS connections. Users can also choose an HTTP version by using the Mint.HTTP1 or Mint.HTTP2 modules directly.

Safe-by-default HTTPS

When connecting over HTTPS, Mint performs certificate verification by default. Mint also has an optional dependency on CAStore, which provides certificates from Mozilla's CA Certificate Store.

A few users are happy about this news, with one user commenting on Hacker News, "I like that Mint keeps dependencies to a minimum." Another user commented, "I'm liking the trend of designing runtime-behaviour agnostic libraries in Elixir."

To know more, check out Mint's official blog post.

Will putting limits on how much JavaScript is loaded by a website help prevent user resource abuse?

Bhagyashree R
31 Jan 2019
3 min read
Yesterday, Craig Hockenberry, who is a Partner at The Iconfactory, reported a bug on WebKit, which focuses on adding a limit on how much JavaScript code a website can load to avoid resource abuse of user computers. Hockenberry feels that though content blocking has helped in reducing the resource abuse and hence providing better performance and better battery life, there are few downsides of using content blockers. His bug report said, “it's hurting many smaller sites that rely on advertising to keep the lights on. More and more of these sites are pleading to disable content blockers.” This results in collateral damage to smaller sites. As a solution to this, he suggested that we need to find a way to incentivize JavaScript developers who keep their codebase smaller and minimal. “Great code happens when developers are given resource constraints... Lack of computing resources inspires creativity”, he adds. As an end result, he believes that we can allow sites to show as many advertisements as they want, but keeping the overall size under a fixed amount. He believes that we can also ask users for permission by adding a simple dialog box, for example, "The site example.com uses 5 MB of scripting. Allow it?” This bug report triggered a discussion on Hacker News, and though few users agreed to his suggestion most were against it. Some developers mentioned that users usually do not read the dialogs and blindly click OK to get the dialog to go away. And, even if users read the dialog, they will not be knowing how much JavaScript code is too much. “There's no context to tell her whether 5MB is a lot, or how it compares to payloads delivered by similar sites. It just expects her to have a strong opinion on a subject that nobody who isn't a coder themselves would have an opinion about,” he added. 
Other ways to prevent JavaScript code from slowing down browsers

Despite the disagreement, developers do agree that there is a need for user-friendly resource limitations in browsers, and some suggested other ways to prevent JavaScript bloat. One of them said it would be good to add resource limits on CPU usage, the number of HTTP requests, and memory usage:

“CPU usage allows an initial burst, but after a few seconds dial down to max ~0.5% of CPU, with additional bursts allowed after any user interaction (like click or keyboard)

Number of HTTP requests (again, initial bursts allowed and in response to user interaction, but radically delay/queue requests for the sites that try to load a new ad every second even after the page has been loaded for 10 minutes)

Memory usage (probably the hardest one to get right though)”

Another user adds, “With that said, I do hope we're able to figure out how to treat web "sites" and web "apps" differently - for the former, I want as little JS as possible since that just gets in the way of content, but for the latter, the JS is necessary to get the app running, and I don't mind if it's a few megabytes in size.”

You can read the bug report on WebKit Bugzilla.

D3.js 5.8.0, a JavaScript library for interactive data visualizations in browsers, is now out!
16 JavaScript frameworks developers should learn in 2019
npm JavaScript predictions for 2019: React, GraphQL, and TypeScript are three technologies to learn
Bhagyashree R
02 Jul 2019
3 min read

LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces

Yesterday, the team behind V8, an open source JavaScript engine, shared the work they and the community have been doing to make the LLVM WebAssembly backend the default backend for Emscripten. LLVM is a compiler framework and Emscripten is an LLVM-to-Web compiler.

https://twitter.com/v8js/status/1145704863377981445

The LLVM WebAssembly backend will be the third backend in Emscripten. The original compiler was written in JavaScript and parsed LLVM IR in text form. In 2013, a new backend called Fastcomp was written by forking LLVM; it was designed to emit asm.js and was a big improvement in code quality and compile times. According to the announcement, the LLVM WebAssembly backend beats the old Fastcomp backend on most metrics. Here are the advantages this backend will come with:

Much faster linking

The LLVM WebAssembly backend allows incremental compilation using WebAssembly object files. Fastcomp uses LLVM Intermediate Representation (IR) in bitcode files, which means that at link time the IR still has to be compiled by LLVM; this is why it shows slower link times. WebAssembly object files (.o), on the other hand, already contain compiled WebAssembly code, which accounts for much faster linking.

Faster and smaller code

The new backend shows a significant code size reduction compared to Fastcomp. “We see similar things on real-world codebases that are not in the test suite, for example, BananaBread, a port of the Cube 2 game engine to the Web, shrinks by over 6%, and Doom 3 shrinks by 15%!,” shared the team in the announcement. The faster and smaller code comes from LLVM's better IR optimizations and its smarter backend codegen, which can do things like global value numbering (GVN). Along with that, the team has put effort into tuning the Binaryen optimizer, which also helps make the code smaller and faster compared to Fastcomp.
Support for all LLVM IR

While Fastcomp could handle the LLVM IR generated by clang, it often failed on other sources. The LLVM WebAssembly backend, on the contrary, can handle any IR as it uses the common LLVM backend infrastructure.

New WebAssembly features

Fastcomp generates asm.js before running asm2wasm. This makes it difficult to handle new WebAssembly features like tail calls, exceptions, SIMD, and so on. “The WebAssembly backend is the natural place to work on those, and we are in fact working on all of the features just mentioned!,” the team added.

To test the WebAssembly backend, you just have to run the following commands:

emsdk install latest-upstream
emsdk activate latest-upstream

Read more in detail on V8’s official website.

V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more
V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more
Google’s V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon
Bhagyashree R
27 Feb 2019
2 min read

Introducing Zero Server, a zero-configuration server for React, Node.js, HTML, and Markdown

Developers behind the CodeInterview.io and RemoteInterview.io websites have come up with Zero, a web framework that simplifies modern web development. Zero takes on the overhead of the usual project configuration for routing, bundling, and transpiling to make it easier to get started.

Zero applications consist of static and code files. Static files are all non-code files like images, documents, media files, etc. Code files are parsed, bundled, and served by a particular builder for that file type. Zero supports Node.js, React, HTML, and Markdown/MDX.

Features in Zero server

Autoconfiguration

Zero eliminates the need for any configuration files in your project folder. Developers just have to place their code and it will be automatically compiled, bundled, and served.

File-system based routing

Routing is based on the file system; for example, if your code is placed in ‘./api/login.js’, it will be exposed at ‘http://domain.com/api/login’.

Auto-dependency resolution

Dependencies are automatically installed and resolved. To install a specific version of a package, developers just have to create their own package.json.

Support for multiple languages

Zero supports code written in multiple languages. So, with Zero, you can do things like expose your TensorFlow model as a Python API and write user login code in Node.js, all under a single project folder.

Better error handling

Zero isolates endpoints from each other by running them in their own processes. This ensures that if one endpoint crashes, there is no effect on any other component of the application. For instance, if /api/login crashes, there is no effect on the /chatroom page or the /api/chat API. Crashed endpoints are also automatically restarted when the next user visits them.

To know more about the Zero server, check out its official website.
Introducing Mint, a new HTTP client for Elixir
Symfony leaves PHP-FIG, the framework interoperability group
Google Chrome developers “clarify” the speculations around Manifest V3 after a study nullifies their performance hit argument
Bhagyashree R
17 May 2019
3 min read

V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more

Yesterday, the team behind V8, Google Chrome’s JavaScript and WebAssembly engine, announced the release of V8 7.5 beta. As per V8’s release cycle, its stable version will release in coordination with the Chrome 75 stable release, expected in early June. This release comes with WebAssembly implicit caching, bulk memory operations, JavaScript numeric separators for better readability, and more.

A few updates in V8 7.5 Beta

WebAssembly implicit caching

The team is planning to introduce implicit caching of WebAssembly compilation artifacts in Chrome 75, similar to Chromium’s JavaScript code cache. Code caching is an important browser optimization that reduces the start-up time of commonly visited web pages by caching the result of parsing and compilation. This essentially means that if a user visits the same web page a second time, the already-seen WebAssembly modules will not be compiled again, but will instead be loaded from the cache.

WebAssembly bulk memory operations

V8 7.5 comes with a few new WebAssembly instructions for updating large regions of memory or tables. The following are some of these instructions:

memory.fill: Fills a memory region with a given byte.
memory.copy: Copies data from a source memory region to a destination region, even if the regions overlap.
table.copy: Similar to memory.copy, copies from one region of a table to another, even if the regions overlap.

JavaScript numeric separators for better readability

The human eye finds it difficult to quickly parse a large numeric literal, especially when it contains long digit repetitions, for instance, 10000000. To improve the readability of long numeric literals, a new feature allows using underscores as separators, creating a visual separation between groups of digits. This works with both integers and floating point.
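A quick sketch of the numeric separator syntax (the values here are arbitrary examples; the code runs in any engine that ships the feature):

```javascript
// Underscores are purely visual grouping; they do not change the value.
const budget = 1_000_000;      // same value as 1000000
const tenMiB = 10_485_760;     // 10 MiB in bytes
const pi = 3.141_592;          // separators work in the fractional part too

console.log(budget === 1000000);  // true
console.log(pi === 3.141592);     // true
```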
Streaming script source data directly from the network

In previous Chrome versions, script source data coming in from the network always had to go to the Chrome main thread first before being forwarded to the streamer. This made the streaming parser wait for data that had already arrived from the network but hadn’t yet been forwarded to the streaming task because it was blocked on the main thread. Starting from Chrome 75, V8 can stream scripts directly from the network into the streaming parser, without waiting for the Chrome main thread.

To know more, check out the official announcement on the V8 Blog.

Electron 5.0 ships with new versions of Chromium, V8, and Node.js
Introducing Node.js 12 with V8 JavaScript engine, improved worker threads, and much more
V8 7.2 Beta releases with support for public class fields, well-formed JSON.stringify, and more
Savia Lobo
11 Dec 2018
3 min read

GitHub plans to deprecate GitHub services and move to Webhooks in 2019

On April 25 this year, GitHub announced that it will be shutting down GitHub Services in order to focus on other areas of the API, such as strengthening GitHub Apps and GraphQL, and improving webhooks. According to GitHub, webhooks are much easier for both users and GitHub staff to debug on the web because of improved logging. GitHub Services has not supported new features since April 25, 2016, and was officially deprecated on October 1st, 2018. The community stated that this functionality will be removed from GitHub.com on January 31st, 2019.

The main intention of GitHub Services was to allow third-party developers to submit code for integrating with their services, but this functionality has been superseded by GitHub Apps and webhooks. Since October 1st, 2018, users can no longer add GitHub Services to any repository on GitHub.com, via the UI or API. Users can, however, continue to edit or delete existing GitHub Services.

GitHub services vs. webhooks

The key differences between GitHub Services and webhooks include:

Configuration: GitHub Services have service-specific configuration options, while webhooks are simply configured by specifying a URL and a set of events.
Custom logic: GitHub Services can have custom logic to respond with multiple actions as part of processing a single event, while webhooks have no custom logic.
Types of requests: GitHub Services can make HTTP and non-HTTP requests, while webhooks can make HTTP requests only.

Brownout for GitHub Services

During the week of November 5th, 2018, a week-long brownout for GitHub Services was scheduled: any GitHub Service installed on a repository would not receive payloads, with normal GitHub Services operations resuming at the conclusion of the brownout. The main motivation behind the brownout was to let GitHub users and integrators see where GitHub Services are still being used and begin working towards migrating away from them.
However, they decided that a week-long brownout would be too disruptive for everyone. Instead, they plan a gradual increase in brownouts until the final blackout date of January 31st, 2019, when they will permanently stop delivering all installed services' events on GitHub.com. As per the updated deprecation timeline:

On December 12th, 2018, GitHub Service deliveries will be suspended for a full 24 hours.
On January 7th, 2019, GitHub Services will be suspended for a full 7 days, with regular deliveries resuming on January 14th, 2019.

Users should ensure that their repositories use the newer APIs available for handling events. The following changes have taken place since October 1st, 2018:

The "Create a hook" endpoint previously accepted a required argument called name, which could be set to web for webhooks, or to the name of any valid service. Since October 1st, this endpoint no longer requires a name to be provided; if it is, web is the only accepted value.
Stricter API validation was enforced on November 1st: name is no longer necessary as an argument, and requests sending a service name are rejected.

To learn more about this deprecation, check out Replacing GitHub Services.

GitHub introduces Content Attachments API (beta)
Microsoft Connect(); 2018 Azure updates: Azure Pipelines extension for Visual Studio Code, GitHub releases and much more!
GitHub acquires Spectrum, a community-centric conversational platform
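For integrators moving off a legacy service, the replacement is the same "Create a hook" endpoint (POST /repos/:owner/:repo/hooks) with name set to web. A minimal sketch of the request body follows; the receiver URL and event list are placeholders:

```javascript
// Request body for POST /repos/:owner/:repo/hooks. Since October 1st, 2018,
// "web" is the only value accepted for `name`; service names are rejected.
const hook = {
  name: "web",
  active: true,
  events: ["push", "pull_request"],      // which events to deliver
  config: {
    url: "https://example.com/webhook",  // placeholder receiver endpoint
    content_type: "json",
  },
};

console.log(JSON.stringify(hook, null, 2));
```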
Amrata Joshi
16 Nov 2018
2 min read

Node v11.2.0 released with major updates in timers, windows, HTTP parser and more

Yesterday, the Node.js community released Node v11.2.0. This new version comes with a new experimental HTTP parser (llhttp), along with updates to timers, Windows support, and more. Node v11.1.0 was released earlier this month.

Major updates

Node v11.2.0 fixes an issue in timers that could cause setTimeout to stop working as expected.
If the node.pdb file is available, a crashing process will now show the names of stack frames.
This version improves the installer's new stage that installs native build tools.
Node v11.2.0 adds a prompt to the tools installation script, giving a visible warning that lessens the probability of users skipping ahead without reading.
On Windows, the windowsHide option has been set to false. This lets detached child processes and GUI apps start in a new window.
This version also introduces the experimental llhttp HTTP parser. llhttp is written in human-readable TypeScript, making it verifiable and easier to maintain. The parser is used to generate output C and/or bitcode artifacts, which can be compiled and linked with the embedder's program (like Node.js).
The eventEmitter.emit() method allows an arbitrary set of arguments to be passed to the listener functions.

Improvements in Cluster

The cluster module allows easy creation of child processes that share server ports. It now supports two methods of distributing incoming connections. The first is the round-robin approach, the default on all platforms except Windows: the master process listens on a port, accepts new connections, and distributes them across the workers in a round-robin fashion. This approach avoids overloading a worker process. In the second approach, the master process creates the listen socket and sends it to interested workers, which then accept incoming connections directly. Theoretically, the second approach gives the best performance.
Read more about this release on the official page of Node.js.

Node.js v10.12.0 (Current) released
Node.js and JS Foundation announce intent to merge; developers have mixed feelings
low.js, a Node.js port for embedded systems