
Tech News - Server-Side Web Development

85 Articles

Introducing Stateful Functions, an OS framework to easily build and orchestrate distributed stateful apps, by Apache Flink’s Ververica

Vincy Davis
19 Oct 2019
3 min read
Last week, Apache Flink’s stream processing company Ververica announced the launch of Stateful Functions, an open source framework developed to reduce the complexity of building and orchestrating distributed stateful applications. It is built with the aim of bringing together the benefits of stream processing with Apache Flink and Function-as-a-Service (FaaS). Ververica will propose the project, licensed under Apache 2.0, to the Apache Flink community as an open source contribution.

Read More: Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

The co-founder and CTO at Ververica, Stephan Ewen, says, “Orchestration for stateless compute has come a long way, driven by technologies like Kubernetes and FaaS — but most offerings still fall short for stateful distributed applications.” He further adds, “Stateful Functions is a big step towards addressing those shortcomings, bringing the seamless state management and consistency from modern stream processing to space.”

Stateful Functions is designed as a simple and powerful abstraction based on functions that can interact with each other asynchronously and be composed into complex networks of functionality. This approach eliminates the need for additional infrastructure for application state management, and it reduces operational overhead as well as overall system complexity. Stateful functions are meant to let users define independent functions with a small footprint that can interact reliably with each other. Each function has persistent, user-defined state in local variables and can arbitrarily message other functions.

The stateful function framework simplifies use cases such as:

- Asynchronous application processes (checkout, payment, logistics)
- Heterogeneous, load-varying event stream pipelines (IoT event rule pipelines)
- Real-time context and statistics (ML feature assembly, recommenders)

The runtime of the Stateful Functions API is based on the stream processing capability of Apache Flink and extends its powerful model for state management and fault tolerance. The major advantage of this framework is that state and computation are co-located on the same side of the network. This means that “you don’t need the round-trip per record to fetch state from an external storage system nor a specific state management pattern for consistency.” Though the Stateful Functions API is independent of Flink, its runtime is built on top of Flink’s DataStream API and uses a lightweight version of process functions. “The core advantage here, compared to vanilla Flink, is that functions can arbitrarily send events to all other functions, rather than only downstream in a DAG,” stated the official blog.

[Image: architecture diagram of function bundles multiplexed into a single Flink application. Source: Ververica blog]

As shown in the figure above, a Stateful Functions application consists of multiple bundles of functions that are multiplexed into a single Flink application, enabling them to interact consistently and reliably with each other. This also lets many small jobs share the same pool of resources and tap into them as needed.

Many Twitterati are excited about this announcement.

https://twitter.com/sijieg/status/1181518992541933568
https://twitter.com/PasqualeVazzana/status/1182033530269949952
https://twitter.com/acmurthy/status/1181574696451620865

Head over to the Stateful Functions website to know more details.
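To make the programming model concrete, here is a toy, in-memory Python sketch of the idea the announcement describes: small, independent functions with persistent per-entity state that message each other asynchronously. This is not the Stateful Functions API itself (the real framework runs such functions on Flink's runtime with fault-tolerance guarantees); the function names and dispatch loop are purely illustrative.

```python
# Toy illustration only -- NOT the Stateful Functions API.
# Each (function, entity) pair owns a small piece of state, and functions
# communicate exclusively through asynchronous messages.
from collections import defaultdict, deque

state = defaultdict(dict)   # stands in for durable, co-located per-entity state
mailbox = deque()           # stands in for the asynchronous message fabric

def send(target, entity_id, message):
    mailbox.append((target, entity_id, message))

def checkout(entity_id, message):
    s = state[("checkout", entity_id)]
    s["items"] = s.get("items", 0) + 1                 # update local persistent state
    send(payment, entity_id, {"items": s["items"]})    # message another function

def payment(entity_id, message):
    print(f"charging {entity_id} for {message['items']} item(s)")

send(checkout, "user-7", {"item": "book"})
while mailbox:               # the runtime's dispatch loop, reduced to a toy
    target, entity_id, message = mailbox.popleft()
    target(entity_id, message)
```

In the real framework, that state would be managed by Flink with exactly-once guarantees, and, as the blog notes, functions can send events to any other function rather than only downstream in a DAG.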
- OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP Improvements, and more
- Developers ask for an option to disable Docker Compose from automatically reading the .env file
- Ubuntu 19.10 releases with MicroK8s add-ons, GNOME 3.34, ZFS on root, NVIDIA-specific improvements, and much more!
- Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
- Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices


HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more

Vincy Davis
17 Jun 2019
6 min read
Last week, HAProxy 2.0 was released with critical features for cloud-native and containerized environments. This is an LTS (long-term support) release that includes a powerful set of core features such as Layer 7 retries, cloud-native threading and logging, polyglot extensibility, and gRPC support, and it improves seamless integration into modern architectures. In conjunction with this release, the HAProxy team also introduced the HAProxy Kubernetes Ingress Controller and the HAProxy Data Plane API. The founder of HAProxy Technologies, Willy Tarreau, has said that these developments will come with the HAProxy 2.1 version. The HAProxy project has also opened up issue submissions on its HAProxy GitHub account.

Some features of HAProxy 2.0

Cloud-Native Threading and Logging

HAProxy can now scale to accommodate any environment with less manual configuration: the number of worker threads is matched to the machine’s number of available CPU cores, and the process setting is no longer required, simplifying the bind line. Two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid allocating huge structs. Logging has also been made easier for containerized environments: direct logging to stdout and stderr, or to a file descriptor, is now possible.

Kubernetes Ingress Controller

The HAProxy Kubernetes Ingress Controller provides high-performance ingress for Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, and whitelisting. Ingresses can be configured through either ConfigMap resources or annotations. The Ingress Controller gives users the ability to:

- Use only one IP address and port and direct requests to the correct pod based on the Host header and request path
- Secure communication with built-in SSL termination
- Apply rate limits for clients while optionally whitelisting IP addresses
- Select from among any of HAProxy's load-balancing algorithms
- Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics
- Set maximum connection limits to backend servers to prevent overloading services

Layer 7 Retries

With HAProxy 2.0, it is possible to retry failed HTTP requests from another server at Layer 7. The new configuration directive, retry-on, can be used in a defaults, listen, or backend section; the number of retry attempts is specified with the retries directive. The full list of retry-on options is given on the HAProxy blog. HAProxy 2.0 also introduces a new http-request action called disable-l7-retry, which lets the user disable any attempt to retry a request if it fails for any reason other than a connection failure. This can be useful to make sure that POST requests aren’t retried.

Polyglot Extensibility

The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7, with the aim of creating the extension points necessary to build upon HAProxy using any programming language. From HAProxy 2.0, libraries and examples are available for the following languages and platforms:

- C
- .NET Core
- Golang
- Lua
- Python

gRPC

HAProxy 2.0 delivers full support for the open-source RPC framework gRPC, allowing bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. Two new converters, protobuf and ungrpc, have been introduced to extract raw Protocol Buffer messages. Using Protocol Buffers, gRPC enables users to serialize messages into a binary format that’s compact and potentially more efficient than JSON. To start using gRPC in HAProxy, users need to set up a standard end-to-end HTTP/2 configuration.

HTTP Representation (HTX)

The Native HTTP Representation (HTX) was introduced with HAProxy 1.9. Starting from 2.0, it is enabled by default. HTX creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. It also allows HAProxy to maintain consistent semantics from end to end and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.

LTS Support for 1.9 Features

HAProxy 2.0 brings LTS support for many features that were introduced or improved upon during the 1.9 release. Some of them are listed below:

- Small Object Cache with an increased caching size of up to 2GB, set with the max-object-size directive. The total-max-size setting determines the total size of the cache and can be increased up to 4095MB.
- New fetches like date_us, cpu_calls, and more, which report either internal state or information from layers 4, 5, 6, and 7.
- New converters like strcmp, concat, and more, which allow users to transform data within HAProxy.
- Server Queue Priority Control, which lets users prioritize some queued connections over others. This is helpful to deliver JavaScript or CSS files before images.
- The resolvers section supports using resolv.conf by specifying parse-resolv-conf.

The HAProxy team plans to build HAProxy 2.1 with features like UDP support, OpenTracing, and dynamic SSL certificate updates. HAProxy’s inaugural community conference, HAProxyConf, is scheduled to take place in Amsterdam, Netherlands on November 12-13, 2019.

A user on Hacker News comments, “HAProxy is probably the best proxy server I had to deal with ever. It's performance is exceptional, it does not interfere with L7 data unless you tell it to and it's extremely straightforward to configure reading the manual.”

Some users, meanwhile, compared HAProxy with the nginx web server. One user says, “In my previous company we used to use HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance wise is a contender for most usual applications people needed. nginx just fulfills most people's requirements for reverse proxy and has solid HTTP/2 support (and other features) for way longer.”

Another user states, “Big difference is that haproxy did not used to support ssl without using something external like stunnel -- nginx basically did it all out of the box and I haven't had a need for haproxy in quite some time now.”

Others suggest that HAProxy is trying hard to stay equipped with the latest features in this release.

https://twitter.com/garthk/status/1140366975819849728

A user on Hacker News agrees, saying, “These days I think HAProxy and nginx have grown a lot closer together on capabilities.”

Visit the HAProxy blog for more details about HAProxy 2.0.

- HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
- MariaDB announces the release of MariaDB Enterprise Server 10.4
- Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service


New memory usage optimizations implemented in V8 Lite can also benefit V8

Sugandha Lahoti
13 Sep 2019
4 min read
V8 Lite was released in late 2018 in V8 version 7.3 to dramatically reduce V8’s memory usage. V8 is Google’s open-source JavaScript and WebAssembly engine, written in C++. V8 Lite provides a 22% reduction in typical web page heap size compared to V8 version 7.1 by disabling code optimization, not allocating feedback vectors, and aging seldom-executed bytecode.

Initially, this project was envisioned as a separate Lite mode of V8. However, the team realized that many of the memory optimizations could be used in regular V8, thereby benefiting all users of V8. They found that most of the memory savings of Lite mode, with none of the performance impact, could be achieved by making V8 lazier. The team implemented lazy feedback allocation, lazy source positions, and bytecode flushing to bring the V8 Lite memory optimizations to regular V8.

Read also: LLVM WebAssembly backend will soon become Emscripten default backend, V8 announces

Lazy allocation of feedback vectors

The team now lazily allocates feedback vectors after a function executes a certain amount of bytecode (currently 1KB). Since most functions aren’t executed very often, feedback vector allocation is avoided in most cases, but vectors are quickly allocated where needed to avoid performance regressions and still allow code to be optimized. One hitch was that lazy allocation did not allow feedback vectors to form a tree. To address this, the team created a new ClosureFeedbackCellArray to maintain this tree, then swaps out a function’s ClosureFeedbackCellArray with a full FeedbackVector when it becomes hot. The team says they “have enabled lazy feedback allocation in all builds of V8, including Lite mode where the slight regression in memory compared to their original no-feedback allocation approach is more than compensated by the improvement in real-world performance.”

Compiling bytecode without collecting source positions

Source position tables are generated when compiling bytecode from JavaScript. However, this information is only needed when symbolizing exceptions or performing developer tasks such as debugging. To avoid this waste, bytecode is now compiled without collecting source positions; the source positions are only collected when a stack trace is actually generated. The team has also fixed bytecode mismatches and added checks and a stress mode to ensure that eager and lazy compilation of a function always produce consistent outputs.

Flushing compiled bytecode from functions not executed recently

Bytecode compiled from JavaScript source takes up a significant chunk of V8 heap space. Therefore, compiled bytecode is now flushed from functions during garbage collection if they haven’t been executed recently, and the feedback vectors associated with the flushed functions are flushed as well. To keep track of the age of a function’s bytecode, the age is incremented after every major garbage collection and reset to zero when the function is executed.

Additional memory optimizations

- The size of FunctionTemplateInfo objects is reduced. The FunctionTemplateInfo object is split such that the rare fields are stored in a side table that is only allocated on demand if required.
- TurboFan optimized code is now deoptimized such that deopt points in optimized code load the deopt id directly before calling into the runtime.

Read also: V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more.

[Image: result comparison for V8 Lite and V8. Source: V8 blog]

People on Hacker News appreciated the work done by the team behind V8. A comment reads, “Great engineering stuff. I am consistently amazed by the work of V8 team. I hope V8 v7.8 makes it to Node v12 before its LTS release in coming October.”

Another says, “At the beginning of the article, they are talking about building a "v8 light" for embedded application purposes, which was pretty exciting to me, then they diverged and focused on memory optimization that's useful for all v8. This is great work, no doubt, but as the most popular and well-tested JavaScript engine, I'd love to see a focus on ease of building and embedding.”

https://twitter.com/vpodk/status/1172320685634420737

More details are available on the V8 blog.

Other interesting news in Tech

- Google releases Flutter 1.9 at GDD (Google Developer Days) conference
- Intel’s DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
- Apple’s September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, new iPad, and more


Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module

Bhagyashree R
18 Jul 2019
2 min read
Yesterday, Syrus Akbary, the founder and CEO of Wasmer, introduced WebAssembly Interfaces. They provide a convenient s-expression (symbolic expression) text format that can be used to validate the imports and exports of a Wasm module.

Why are WebAssembly Interfaces needed?

The Wasmer runtime initially supported only running Emscripten-generated modules and later added support for other ABIs, including WASI and Wascap. WebAssembly runtimes like Wasmer have to do a lot of checks before starting an instance to ensure a WebAssembly module is compliant with a certain Application Binary Interface (Emscripten or WASI): the runtime checks whether the module imports and exports are what it expects, namely that the function signatures and global types match. These checks are important for:

- Making sure a module is going to work with a certain runtime.
- Assuring a module is compatible with a certain ABI.
- Creating a plugin ecosystem for any program that uses WebAssembly as part of its plugin system.

The team behind Wasmer introduced WebAssembly Interfaces to ease this process by providing a way to validate that imports and exports are as expected.

[Image: a WebAssembly Interface definition for WASI. Source: Wasmer]

WebAssembly Interfaces allow you to run various programs with each ABI, such as Nginx (Emscripten) and Cowsay (WASI). When used together with WAPM (the WebAssembly Package Manager), you will also be able to make use of the entire WAPM ecosystem to create, verify, and distribute plugins. The team has also proposed WebAssembly Interfaces as a standard for defining a specific set of imports and exports that a module must have, in a way that is statically analyzable.

Read the official announcement by Wasmer for more details.

- Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
- LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces
- Qt 5.13 releases with a fully-supported WebAssembly module, Chromium 73 support, and more


Node.js v10.12.0 (Current) released

Sugandha Lahoti
11 Oct 2018
4 min read
Node.js v10.12.0 was released yesterday, with notable changes to assert, cli, crypto, fs, and more. The Node.js API is still somewhat changing, and as it matures, certain parts are more reliable than others; hence, throughout the v10.12.0 documentation there are indications of each section’s stability. Let’s look at the notable changes that are stable.

Assert module

The assert module provides a simple set of assertion tests that can be used to test invariants. It comprises a strict mode and a legacy mode, although it is recommended to use only strict mode. In Node.js v10.12.0, the diff output is improved by sorting object properties when inspecting the values that are compared with each other.

Changes to cli

The command-line interface in Node.js v10.12.0 has two improvements:

- The options parser now normalizes _ to - in all multi-word command-line flags, e.g. --no_warnings has the same effect as --no-warnings.
- It also includes bash completion for the node binary. Users can generate a bash completion script by running node --completion-bash. The output can be saved to a file which can be sourced to enable completion.

Crypto module

The crypto module provides cryptographic functionality that includes a set of wrappers for OpenSSL's hash, HMAC, cipher, decipher, sign, and verify functions. In Node.js v10.12.0, crypto adds support for PEM-level encryption as well as an API for asymmetric key pair generation. The new methods crypto.generateKeyPair and crypto.generateKeyPairSync can be used to generate public and private key pairs. The API supports RSA, DSA, and EC and a variety of key encodings (both PEM and DER).

Improvements to the file system

The fs module provides an API for interacting with the file system in a manner closely modeled around standard POSIX functions. Node.js v10.12.0 adds a recursive option to fs.mkdir and fs.mkdirSync. When this option is set to true, non-existing parent folders will be created automatically.

Updates to http2

The http2 module provides an implementation of the HTTP/2 protocol. The new Node.js version adds support for a 'ping' event on Http2Session that is emitted whenever a non-ack PING is received, along with support for the ORIGIN frame. Also, nghttp2 is updated to v1.34.0, which adds RFC 8441 extended connect protocol support to allow the use of WebSockets over HTTP/2.

Changes in module

In the Node.js module system, each file is treated as a separate module. The module system has also been updated in v10.12.0: it adds module.createRequireFromPath(filename). This new method can be used to create a custom require function that will resolve modules relative to the filename path.

Improvements to process

The process object is a global that provides information about, and control over, the current Node.js process. Process adds a 'multipleResolves' event that is emitted whenever a Promise is attempted to be resolved multiple times.

Updates to url

Node.js v10.12.0 adds url.fileURLToPath(url) and url.pathToFileURL(path). These methods can be used to correctly convert between file: URLs and absolute paths.

Changes in utilities

The util module is primarily designed to support the needs of Node.js' own internal APIs. The changes in Node.js v10.12.0 include:

- A new sorted option is added to util.inspect(). If set to true, all properties of an object and the entries of Set and Map objects are sorted in the returned string. If set to a function, it is used as a compare function.
- The util.inspect.custom symbol is now defined in the global symbol registry as Symbol.for('nodejs.util.inspect.custom').
- Support for BigInt numbers in util.format() is also added.

Improvements in the V8 API

The V8 module exposes APIs that are specific to the version of V8 built into the Node.js binary. A number of V8 C++ APIs have been marked as deprecated in v10.12.0, since they have been removed in the upstream repository. Replacement APIs are added where necessary.

Changes in Windows

The Windows msi installer now provides an option to automatically install the tools required to build native modules.

You can find the list of full changes on the Node.js blog.

- Node.js and JS Foundation announce intent to merge; developers have mixed feelings
- Node.js announces security updates for all their active release lines for August 2018
- Deploying Node.js apps on Google App Engine is now easy


Cloudflare and Google Chrome add HTTP/3 and QUIC support; Mozilla Firefox soon to follow suit

Bhagyashree R
30 Sep 2019
5 min read
Major web companies are adopting HTTP/3, the latest iteration of the HTTP protocol, in their experimental as well as production systems. Last week, Cloudflare announced that its edge network now supports HTTP/3. Earlier this month, Google’s Chrome Canary added support for HTTP/3, and Mozilla Firefox will soon be shipping support in a nightly release this fall. The curl command-line client also has support for HTTP/3.

In an announcement, Cloudflare shared that customers can turn on HTTP/3 support for their domains by enabling an option in their dashboards. “We’ve been steadily inviting customers on our HTTP/3 waiting list to turn on the feature (so keep an eye out for an email from us), and in the coming weeks we’ll make the feature available to everyone,” the company added.

Last year, Cloudflare announced preliminary support for QUIC and HTTP/3. Customers could also join a waiting list to try QUIC and HTTP/3 as soon as they became available. Those customers who are on the waiting list and have received an email from Cloudflare can enable the support by flipping the switch from the "Network" tab on the Cloudflare dashboard. Cloudflare further added, “We expect to make the HTTP/3 feature available to all customers in the near future.”

Cloudflare’s HTTP/3 and QUIC support is backed by quiche, an implementation of the QUIC transport protocol and HTTP/3 written in Rust. It provides a low-level API for processing QUIC packets and handling connection state.

Why HTTP/3 was introduced

HTTP/1.0 required the creation of a new TCP connection for each request/response exchange between the client and the server, which resulted in latency and scalability issues. To resolve these issues, HTTP/1.1 was introduced. It included critical performance improvements such as keep-alive connections, chunked encoding transfers, byte-range requests, additional caching mechanisms, and more. The keep-alive, or persistent, connections allowed clients to reuse TCP connections, eliminating the need to constantly perform the initial connection establishment step and reducing the slow start across multiple requests.

However, there were still some limitations. Multiple requests were able to share a single TCP connection, but they still needed to be serialized one after the other. This meant that the client and server could execute only a single request/response exchange at a time for each connection.

HTTP/2 tried to solve this problem by introducing the concept of HTTP streams, which allowed the transmission of multiple requests/responses over the same connection at the same time. The drawback here is that in case of network congestion, all requests and responses are equally affected by packet loss, even if the data that is lost concerns only a single request.

HTTP/3 aims to address the problems in the previous versions of HTTP. It uses a new transport protocol called Quick UDP Internet Connections (QUIC) instead of TCP. The QUIC transport protocol comes with features like stream multiplexing and per-stream flow control.

[Image: diagram of communication between client and server using QUIC and HTTP/3. Source: Cloudflare]

HTTP/3 provides reliability at the stream level and congestion control across the entire connection. QUIC streams share the same QUIC connection, so no additional handshakes are required, and as QUIC streams are delivered independently, packet loss affecting one stream does not affect the others. QUIC also combines the typical three-way TCP handshake with the TLS 1.3 handshake. This provides users encryption and authentication by default and enables faster connection establishment. “In other words, even when a new QUIC connection is required for the initial request in an HTTP session, the latency incurred before data starts flowing is lower than that of TCP with TLS,” Cloudflare explains.

On Hacker News, a few users discussed the differences between HTTP/1, HTTP/2, and HTTP/3. Comparing the three, a user commented, “Not aware of benchmarks, but specification-wise I consider HTTP2 to be a regression...I'd rate them as follows: HTTP3 > HTTP1.1 > HTTP2 QUIC is an amazing protocol...However, the decision to make HTTP2 traffic go all through a single TCP socket is horrible and makes the protocol very brittle under even the slightest network delay or packet loss...Sure it CAN work better than HTTP1.1 under ideal network conditions, but any network degradation is severely amplified, to a point where even for traffic within a datacenter can amplify network disruption and cause an outage. HTTP3, however, is a refinement on those ideas and gets pretty much everything right afaik.”

Some expressed that the creators of HTTP/3 should also focus on the “real” issues of HTTP, including proper session support and getting rid of cookies. Others appreciated this step, saying, “It's kind of amazing seeing positive things from monopolies and evergreen updates. These institutions can roll out things fast. It's possible in hardware too-- remember Bell Labs in its hay days?”

These were some of the advantages HTTP/3 and QUIC provide over HTTP/2. Read the official announcement by Cloudflare to know more in detail.

- Cloudflare plans to go public; files S-1 with the SEC
- Cloudflare finally launches Warp and Warp Plus after a delay of more than five months
- Cloudflare RCA: Major outage was a lot more than “a regular expression went bad”

Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

Fatema Patrawala
26 Aug 2019
5 min read
Last week the Apache Flink community announced the release of Apache Flink 1.9.0. The Flink community defines the project goal as “to develop a stream processing system to unify and power many forms of real-time and offline data processing applications as well as event-driven applications.” With this release, they have made a huge step forward in that effort by integrating Flink’s stream and batch processing capabilities under a single, unified runtime.

There are significant features in this release, namely batch-style recovery for batch jobs and a preview of the new Blink-based query engine for the Table API and SQL queries. The team also announced the availability of the State Processor API, one of the most frequently requested features, which enables users to read and write savepoints with Flink DataSet jobs. Additionally, Flink 1.9 includes a reworked WebUI, a preview of Flink’s new Python Table API, and integration with the Apache Hive ecosystem.

Let us take a look at the major new features and improvements.

New features and improvements in Apache Flink 1.9.0

Fine-grained batch recovery

The time to recover a batch (DataSet, Table API, and SQL) job from a task failure is significantly reduced. Until Flink 1.9, task failures in batch jobs were recovered by canceling all tasks and restarting the whole job, i.e., the job was started from scratch and all progress was voided. With this release, Flink can be configured to limit recovery to only those tasks that are in the same failover region. A failover region is the set of tasks that are connected via pipelined data exchanges; hence, the batch-shuffle connections of a job define the boundaries of its failover regions.

State Processor API

Up to Flink 1.9, accessing the state of a job from the outside was limited to the experimental Queryable State. This release introduces a new, powerful library to read, write, and modify state snapshots using the batch DataSet API. In practice, this means:

- Flink job state can be bootstrapped by reading data from external systems, such as external databases, and converting it into a savepoint.
- State in savepoints can be queried using any of Flink’s batch APIs (DataSet, Table, SQL), for example to analyze relevant state patterns or check for discrepancies in state that can support application auditing or troubleshooting.
- The schema of state in savepoints can be migrated offline, compared to the previous approach requiring online migration on schema access.
- Invalid data in savepoints can be identified and corrected.

The new State Processor API covers all variations of snapshots: savepoints, full checkpoints, and incremental checkpoints.

Stop-with-Savepoint

Cancelling with a savepoint is a common operation for stopping/restarting, forking, or updating Flink jobs. However, the existing implementation did not guarantee output persistence to external storage systems for exactly-once sinks. To improve the end-to-end semantics when stopping a job, Flink 1.9 introduces a new SUSPEND mode to stop a job with a savepoint that is consistent with the emitted data. You can suspend a job with Flink’s CLI client as follows:

bin/flink stop -p [:targetDirectory] :jobId

The final job state is set to FINISHED on success, allowing users to detect failures of the requested operation.

Flink WebUI rework

After a discussion about modernizing the internals of Flink’s WebUI, this component was reconstructed using the latest stable version of Angular: basically, a bump from Angular 1.x to 7.x. The redesigned version is the default in Apache Flink 1.9.0; however, there is a link to switch to the old WebUI.

Preview of the new Blink SQL query processor

After the donation of Blink to Apache Flink, the community worked on integrating Blink’s query optimizer and runtime for the Table API and SQL. The team refactored the monolithic flink-table module into smaller modules, which resulted in a clear separation of well-defined interfaces between the Java and Scala API modules and the optimizer and runtime modules.

Other important changes in this release:

- The Table API and SQL are now part of the default configuration of the Flink distribution. Previously, the Table API and SQL had to be enabled by moving the corresponding JAR file from ./opt to ./lib.
- The machine learning library (flink-ml) has been removed in preparation for FLIP-39.
- The old DataSet and DataStream Python APIs have been removed in favor of FLIP-38.
- Flink can be compiled and run on Java 9. Note that certain components interacting with external systems (connectors, filesystems, reporters) may not work, since the respective projects may have skipped Java 9 support.

The binary distribution and source artifacts for this release are now available via the Downloads page of the Flink project, along with the updated documentation. Flink 1.9 is API-compatible with previous 1.x releases for APIs annotated with the @Public annotation. You can review the release notes for the detailed list of changes and new features before upgrading your setup to Flink 1.9.0.

- Apache Flink 1.8.0 releases with finalized state schema evolution support
- Apache Flink founders data Artisans could transform stream processing with patent-pending tool
- Apache Flink version 1.6.0 released!


Node v11.0.0 released

Prasad Ramesh
24 Oct 2018
2 min read
Node v11.0.0 has been released. The focus of this release is primarily on improving internals and performance, and it updates to the stable V8 7.0.

Build and console changes in Node v11.0.0

- Build: FreeBSD 10 support is removed.
- child_process: The default value of the windowsHide option is now true.
- console: The console.countReset() function will emit a warning if the timer being reset does not exist, and if a timer already exists, console.time() will no longer reset it.

Dependency and http changes

- Under dependencies, the Chrome V8 engine has been updated to v7.0.
- fs: The fs.read() method now requires a callback. The fs.SyncWriteStream utility, which was deprecated previously, has now been removed.
- http: In Node v11.0.0 the http, https, and tls modules use the WHATWG URL parser by default.

General changes

process.binding() has been deprecated and can no longer be used; userland code using process.binding() should re-evaluate that use and begin migrating. There is also an experimental implementation of queueMicrotask() added.

Internal changes

The Windows performance-counter support has been removed, as has the --expose-http2 command-line option. In timers, interval timers will be rescheduled even if the previous interval threw an error, and the nextTick queue will be run after each immediate and timer.

Changes in utilities

The WHATWG TextEncoder and TextDecoder APIs are now global. The util.inspect() method’s output size is limited to 128 MB by default. When NODE_DEBUG is set for either http or http2, a runtime warning will be emitted.

Some other additions

- '-z relro -z now' linker flags
- an internal PriorityQueue class
- an InitializeV8Platform function
- a string-decoder fuzz test
- a new_large_object_space heap space
- a dns memory error test
- warnings when NODE_DEBUG is set as http/http2
- an Inspect suffix to BigInt64Array elements

For more details and a complete list of changes, visit the Node website.

- Deno, an attempt to fix Node.js flaws, is rewritten in Rust
- npm at Node+JS Interactive 2018: npm 6, the rise and fall of JavaScript frameworks, and more
- The top 5 reasons why Node.js could topple Java


Python 3.7 beta is available as the second generation Google App Engine standard runtime

Sugandha Lahoti
09 Aug 2018
2 min read
Google has announced the availability of Python 3.7 in beta on the App Engine standard environment. Developers can now easily run their web apps using up-to-date versions of popular languages, frameworks, and libraries, with Python being one of them.

The Second Generation runtimes remove previous App Engine restrictions, giving developers the ability to write portable web apps and microservices while taking full advantage of App Engine features such as auto-scaling, built-in security, and the pay-per-use billing model.

Python 3.7 was introduced as one of the new Second Generation runtimes at Cloud Next. The Python 3.7 runtime brings developers up to date with the language community's progress, and, as a Second Generation runtime, it enables a faster path to continued runtime updates. It also supports arbitrary third-party libraries, including those that rely on C code and native extensions. The new Python 3.7 runtime additionally supports the Google Cloud client libraries, so developers can integrate GCP services into their app and run it on App Engine, Compute Engine, or any other platform.

LumApps, a Paris-based provider of enterprise Intranet software, has chosen App Engine to optimize for scale and developer productivity. Elie Mélois, CTO & Co-founder, LumApps says, "With the new Python 3.7 runtime on App Engine standard, we were able to deploy our apps very quickly, using libraries that we wanted such as scikit. App Engine helped us scale our platform from zero to over 2.5M users, from three developers to 40—all this with only one DevOps person!"

Check out the documentation to start using Python 3.7 today on the App Engine standard environment.
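For context, a minimal App Engine standard Python 3.7 service follows the familiar quickstart shape: a WSGI app (Flask in Google's samples, though any WSGI framework works) plus an app.yaml declaring the runtime. The sketch below follows that layout; the handler and messages are illustrative, not from the announcement.

```python
# main.py -- minimal sketch of a Python 3.7 App Engine standard service.
# Assumes the quickstart layout: Flask, plus an app.yaml next to this file
# containing a single line: "runtime: python37".
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from Python 3.7 on App Engine standard!"

if __name__ == "__main__":
    # Local development only; in production App Engine serves `app` itself.
    app.run(host="127.0.0.1", port=8080)
```

Deploying is then a matter of running gcloud app deploy from the project directory.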
- Deploying Node.js apps on Google App Engine is now easy
- Hosting on Google App Engine
- Should you move to Python 3? 7 Python experts’ opinions


IETF proposes JSON Meta Application Protocol (JMAP) as the next standard for email protocols

Bhagyashree R
22 Jul 2019
4 min read
Last week, the Internet Engineering Task Force (IETF) published the JSON Meta Application Protocol (JMAP) as RFC 8620, now marked as a “Proposed Standard”. The protocol is authored by Neil Jenkins, Director and UX Architect at Fastmail, and Chris Newman, Principal Engineer at Oracle.

https://twitter.com/Fastmail/status/1152281229083009025

What is JSON Meta Application Protocol (JMAP)?

Fastmail started working on JMAP in 2014 as an internal development project. It is an internet protocol that handles the submission and synchronization of emails, contacts, and calendars between a client and a server, providing a consistent interface to different data types. It was developed to be a possible successor to IMAP and a potential replacement for the CardDAV and CalDAV standards.

Why is it needed?

According to the developers, the current standards for email client-server communication, IMAP and SMTP, are outdated and complicated. They are not well suited to modern mobile networks and high-latency scenarios. These limitations have led to stagnation in the development of good new email clients, and many vendors have come up with proprietary alternatives like Gmail, Outlook, Nylas, and Context.io. Another drawback is that many mobile email clients proxy everything via their own server instead of talking directly to the user’s mail store, for example Outlook and Newton. This is not only bad for client authors, who have to run server infrastructure in addition to just building their clients, but it also raises security and privacy concerns.

Here’s a video by Fastmail explaining the purpose behind JMAP: https://www.youtube.com/watch?v=8qCSK-aGSBA

How does JMAP solve the limitations of the current standards?

JMAP is designed to be easier for developers to work with and to enable efficient use of network resources. Here are some of its properties that address the limitations in current standards:

- Stateless: It does not require a persistent connection, which fits best for mobile environments.
- Immutable ids: It is more like NFS or filesystems with inodes than a name-based hierarchy, which makes renaming easy to detect and cheap to sync.
- Batchable API calls: It batches multiple API calls in a single request to the server, resulting in reduced round trips and better battery life for mobile users.
- Flood control: The client can put limits on how much data the server is allowed to send. For instance, a command will return a ‘tooManyChanges’ error on exceeding the client’s limit, rather than returning a million * 1 EXPUNGED lines as can happen in IMAP.
- No custom parser required: Support for JSON, a well-understood and widely supported encoding format, makes it easier for developers to get started.
- A backward-compatible data model: Its data model is backward compatible with both IMAP folders and Gmail-style labels.
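The batching property is easiest to see in an example. Below is a sketch of a single JMAP request that bundles two method calls into one HTTP round trip; the endpoint URL and account id are hypothetical, and the method names follow the JMAP mail data model rather than any specific server.

```python
# Sketch of a batched JMAP request (hypothetical endpoint and account id).
import json
import urllib.request

body = {
    # Capabilities the client intends to use
    "using": ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    # Each call is [method name, arguments, client-chosen call id]
    "methodCalls": [
        ["Mailbox/get", {"accountId": "u1", "ids": None}, "c0"],
        ["Email/query", {"accountId": "u1", "limit": 10}, "c1"],
    ],
}

req = urllib.request.Request(
    "https://jmap.example.com/api/",  # hypothetical JMAP API endpoint
    data=json.dumps(body).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    # The server returns one response per method call, matched by call id
    print(json.load(resp)["methodResponses"])
```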
Fastmail is already using JMAP in production for its Fastmail and Topicbox products. It is also seeing some adoption in organizations like the Apache Software Foundation, which added experimental support for JMAP to its free mail server Apache James in version 3.0.

Many developers are happy about this announcement. A user on Hacker News said, “JMAP client and the protocol impresses a lot. Just 1 to a few calls, you can re-sync entire emails state in all folders. With IMAP need to select each folder to inspect its state. Moreover, just a few IMAP servers support fast synchronization extensions like QRESYNC or CONDSTORE.”

However, its use of JSON did spark some debate on Hacker News. “JSON is an incredibly inefficient format for shareable data: it is annoying to write, unsafe to parse and it even comes with a lot of overhead (colons, quotes, brackets and the like). I'd prefer s-expressions,” a user commented.

To stay updated with the current developments in JMAP, you can join its mailing list. To read more about the specification, check out the official website and the GitHub repository.

- Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
- Google announces the general availability of AMP for email, faces serious backlash from users
- Sublime Text 3.2 released with Git integration, improved themes, editor control and much more!

Former npm CTO introduces Entropic, a federated package registry with a new CLI and much more!

Amrata Joshi
03 Jun 2019
3 min read
Yesterday, at JSConfEU '19, the team behind Entropic announced Entropic, a federated package registry with a new CLI that works smoothly with the network. Entropic is Apache 2 licensed and federated, and it mirrors all packages that users install from the legacy package manager. Entropic offers a new file-centric API and a content-addressable storage system that minimizes the amount of data that must be retrieved over a network. This file-centric approach also applies to the publication API.

https://www.youtube.com/watch?v=xdLMbvEc2zk

C J Silverio, Principal Engineer at Eaze, said during the announcement, “I actually believe in open source despite everything. I think it's good for us as human beings to give things away to each other.”

https://twitter.com/kosamari/status/1134876898604048384
https://twitter.com/i/moments/1135060936216272896
https://twitter.com/colestrode/status/1135320460072296449

Features of Entropic

Package specifications

All Entropic packages are namespaced, and a full Entropic package spec includes the hostname of its registry. Package specifications are fully qualified with a namespace, hostname, and package name, and appear as namespace@example.com/pkg-name. For example, the ds cli is specified by chris@entropic.dev/ds.

If a user publishes a package to their local registry that depends on packages from other registries, the local instance will mirror all the packages on which the user’s package depends. The team aims to keep each instance entirely self-sufficient, so installs aren’t dependent on a resource that might vanish. Abandoned packages are moved to the abandonware namespace. Packages can be updated by any user in the package's namespace and can also have a list of maintainers.

The ds cli

Entropic requires a new command-line client known as ds, or "entropy delta". According to the Entropic team, the cli doesn't have a very sensible shell for running commands yet. Currently, users who want to install packages using ds can run ds build in a directory with a Package.toml to produce a ds/node_modules directory. The GitHub page reads, “This is a temporary situation!”

Entropic appears to be an alternative to npm, as it seeks to address the limitations of the ownership model of npm, Inc. It aims to shift from centralized ownership to federated ownership, restoring power to the commons.

https://twitter.com/deluxee/status/1135489151627870209

To know more about this news, check out the GitHub page.

- GitHub announces beta version of GitHub Package Registry, its new package management service
- npm Inc. announces npm Enterprise, the first management code registry for organizations
- Using the Registry and xlswriter modules


Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser

Bhagyashree R
07 Mar 2019
2 min read
Yesterday, ZDNet shared that Mozilla will be adding a new anti-fingerprinting technique called letterboxing to Firefox 67, which is set to release in May this year. Letterboxing is part of the Tor Uplift project that started back in 2016 and is currently available to Firefox Nightly users. As part of the Tor Uplift project, the team is slowly bringing the privacy-focused features of the Tor Browser to Firefox. For instance, Firefox 55 came with support for a Tor Browser feature called First-Party Isolation (FPI), which prevented ad trackers from using cookies to track user activity by separating cookies on a per-domain basis.

What is letterboxing and why is it needed?

The dimensions of a browser window are a big source of finger-printable data for advertising networks, which can use browser window sizes to create user profiles and track users as they resize their browser and move across new URLs and browser tabs. To maintain users' online privacy, it is important to protect this window dimension data continuously, even as users resize or maximize their window or enter fullscreen.

What letterboxing does is mask the real dimensions of the browser window by keeping the reported width and height to multiples of 200px and 100px during a resize operation, and then adding a gray space at the top, bottom, left, or right of the current page. Advertising code tracking window resize events reads the flawed dimensions and sends them to its server, and only then does Firefox remove the gray spaces. This is how the advertising code is tricked into reading incorrect window dimensions.

Here is a demo of letterboxing showing how exactly it works: https://www.youtube.com/watch?&v=TQxuuFTgz7M

The letterboxing feature is not enabled by default. To enable it, go to the ‘about:config’ page in the browser, enter "privacy.resistFingerprinting" in the search box, and toggle the browser's anti-fingerprinting features to "true".

To know more in detail about letterboxing, check out ZDNet’s website.
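The rounding itself is simple to picture. Here is a tiny Python sketch of the dimension masking described above, assuming the 200px/100px steps ZDNet reports and round-down behavior; the function is illustrative, not Firefox source code.

```python
# Illustrative only -- not Firefox code. Models the letterboxing idea:
# report window dimensions rounded down to multiples of 200px x 100px.
def letterboxed(width: int, height: int) -> tuple:
    """Dimensions a fingerprinting script would observe during a resize."""
    return (width - width % 200, height - height % 100)

# A 1383x927 window is reported as 1200x900; the 183x27 remainder is
# painted as gray space until the resize settles.
print(letterboxed(1383, 927))  # (1200, 900)
```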
- Mozilla engineer shares the implications of rewriting browser internals in Rust
- Mozilla shares key takeaways from the Design Tools survey
- Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant


Google App Engine standard environment (beta) now includes PHP 7.2

Savia Lobo
23 Aug 2018
2 min read
Google Cloud announced the availability of its latest Second Generation runtime, PHP 7.2, on the App Engine standard environment on Monday. This version is available in beta for users to build and deploy reliable applications with improved flexibility.

Like the other Second Generation runtimes on App Engine standard, Python 3.7 and Node.js 8, PHP 7.2 is open and idiomatic. This means one can run popular frameworks such as Symfony and Laravel, and even WordPress, on PHP 7.2. With PHP 7.2 on the App Engine standard environment, users can easily build and deploy an application that runs reliably under heavy load and with large amounts of data. The application runs within its own secure, reliable environment, independent of the hardware, operating system, or physical location of the server.

Benefits of the Google App Engine standard environment for PHP 7.2

- Faster auto-scaling: Being on the Google App Engine standard environment allows running instances in seconds, which lets an app handle sudden bursts in demand, with deployment times of under a minute for PHP apps. One can also scale apps down to zero instances if required, making it suitable for apps operating at any scale.
- No restrictions in running code: As PHP 7.2 is a Second Generation runtime, one can run any code without restrictions. Existing PHP apps and open source libraries will run unmodified.
- Support for new languages: Because PHP 7.2 does not need a custom-modified language runtime to work with App Engine, support for new languages can be launched quickly.
- Support for Google Cloud client libraries: One can integrate Google Cloud services into an app and run it on App Engine, Compute Engine, or any other platform.

To know more about this news in detail and to get started with PHP 7.2 for App Engine, visit the Google Cloud blog.

- Common PHP Scenarios
- Oracle releases GraphPipe: An open source tool that standardizes machine learning model deployment
- Perform CRUD operations on MongoDB with PHP

Django 2.2 is now out with classes for custom database constraints

Bhagyashree R
02 Apr 2019
2 min read
Yesterday, the Django team announced the release of Django 2.2. This release comes with classes for custom database constraints, Watchman compatibility for runserver, and more, and it supports Python 3.5, 3.6, and 3.7. As a long-term support (LTS) release, it will receive security and data loss updates for at least the next three years. This release also marks the end of mainstream support for Django 2.1, which will continue to receive security and data loss fixes until December 2019.

Following are some of the updates Django 2.2 comes with:

Classes for custom database constraints

Two new classes are introduced to create custom database constraints: CheckConstraint and UniqueConstraint. You can add constraints to models using the Meta.constraints option (see the sketch at the end of this piece).

Watchman compatibility for runserver

This release comes with Watchman compatibility for runserver, replacing Pyinotify. Watchman is a service used to watch files, record when they change, and trigger actions when matching files change.

Simple access to request headers

Django 2.2 comes with HttpRequest.headers to allow simple access to a request’s headers. It provides a case-insensitive, dict-like object for accessing all HTTP-prefixed headers from the request. Each header name is stylized with title-casing when it is displayed, for example, User-Agent.

Deserialization using natural keys and forward references

To perform deserialization, you can now use natural keys containing forward references by passing handle_forward_references=True to serializers.deserialize(). In addition to this, forward references are automatically handled by loaddata.

Some backward incompatible changes and deprecations

- Starting from this release, admin actions are not collected from base ModelAdmin classes.
- Support is dropped for Geospatial Data Abstraction Library (GDAL) 1.9 and 1.10.
- The team has made sqlparse a required dependency to simplify Django’s database handling.
- Permissions for proxy models are now created using the content type of the proxy model.
- Model Meta.ordering no longer affects GROUP BY queries such as .annotate().values(). A deprecation warning is shown with the advice to add an order_by() to retain the current query.

To read the entire list of updates, visit Django’s official website.
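As a quick illustration of the new constraint classes, here is a minimal sketch of a model using Meta.constraints; the Booking model and the constraint names are invented for the example, not taken from the release notes.

```python
# models.py -- sketch of Django 2.2's CheckConstraint and UniqueConstraint.
# The Booking model and constraint names are illustrative.
from django.db import models
from django.db.models import CheckConstraint, Q, UniqueConstraint


class Booking(models.Model):
    room = models.CharField(max_length=20)
    starts_at = models.DateTimeField()
    ends_at = models.DateTimeField()

    class Meta:
        constraints = [
            # Enforced by the database itself, not just by form validation
            CheckConstraint(
                check=Q(starts_at__lt=models.F("ends_at")),
                name="booking_starts_before_it_ends",
            ),
            UniqueConstraint(
                fields=["room", "starts_at"],
                name="unique_room_start",
            ),
        ]
```

Because the constraints live in the model's migrations, the database rejects invalid rows even when they bypass Django forms.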
- Django 2.2 alpha 1.0 is now out with constraints classes, and more!
- Django is revamping its governance model, plans to dissolve Django Core team
- Django 2.1.2 fixes major security flaw that reveals password hash to “view only” admin users


Firefox releases v66.0.4 and 60.6.2 to fix the expired certificate problem that ended up disabling add-ons

Bhagyashree R
06 May 2019
3 min read
Last week on Friday, Firefox users were left infuriated when all their extensions were abruptly disabled. Fortunately, Mozilla has fixed this issue in yesterday’s releases, Firefox 66.0.4 and Firefox 60.6.2.

https://twitter.com/mozamo/status/1124484255159971840

This is not the first time Firefox users have encountered this type of problem. A similar issue was reported back in 2016, and it seems that proper steps were not taken to prevent the issue from recurring.

https://twitter.com/Theobromia/status/1124791924626313216

Multiple users reported that all add-ons were disabled on Firefox because of failed verification. Users were also unable to download any new add-ons and were shown a "Download failed. Please check your connection" error despite having a working connection. This happened because the certificate with which the add-ons were signed had expired. The timestamps mentioned in the certificate were:

Not Before: May 4 00:09:46 2017 GMT
Not After : May 4 00:09:46 2019 GMT

Mozilla did share a temporary hotfix (“hotfix-update-xpi-signing-intermediate-bug-1548973”) before releasing a product with the issue permanently fixed.

https://twitter.com/mozamo/status/1124627930301255680

To apply this hotfix automatically, users need to enable Studies, a feature through which Mozilla tries out new features before releasing them to general users. The Studies feature is enabled by default, but if you have previously opted out of it, you can enable it by navigating to Options | Privacy & Security | Allow Firefox to install and run studies.

https://twitter.com/mozamo/status/1124731439809830912

Mozilla released Firefox 66.0.4 for desktop and Android users and Firefox 60.6.2 for ESR (Extended Support Release) users yesterday with a permanent fix for this issue. These releases repair the certificate to re-enable web extensions that were disabled because of the issue. There are still some issues that need to be resolved, which Mozilla is currently working on:

- A few add-ons may appear unsupported or may not appear in 'about:addons'. Mozilla assures that the add-on data will not be lost, as it is stored locally and can be recovered by re-installing the add-ons.
- Themes will not be re-enabled and will switch back to default.
- If a user’s home page or search settings are customized by an add-on, they will be reset to default.
- Users might see that Multi-Account Containers and Facebook Container are reset to their default state. Containers is a functionality that allows you to segregate your browsing activities within different profiles. As an after-effect of this certificate issue, data that might be lost includes the configuration of which containers to enable or disable, container names, and icons.

Many users depend on Firefox’s extensibility to get their work done, and it is obvious that this issue has left many users sour. “This is pretty bad for Firefox. I wonder how much people straight up & left for Chrome as a result of it,” a user commented on Hacker News.

Read the Mozilla Add-ons Blog for more details.

- Mozilla’s updated policies will ban extensions with obfuscated code
- Mozilla re-launches Project Things as WebThings, an open platform for monitoring and controlling devices
- Mozilla introduces Pyodide, a Python data science stack compiled to WebAssembly