
Tech News - Web Development

354 Articles

Angular CLI 8.3.0 releases with a new deploy command, faster production builds, and more

Bhagyashree R
26 Aug 2019
3 min read
Last week, the Angular team announced the release of Angular CLI 8.3.0. Along with a redesigned website, this release comes with a new deploy command and improves the previously introduced differential loading.

https://twitter.com/angular/status/1164653064898277378

Key updates in Angular CLI 8.3.0

Deploy directly from the CLI to a cloud platform with the new deploy command

Starting from Angular CLI 8.3.0, you have a new deploy command to execute the deploy CLI builder associated with your project. It is essentially a simple alias to ng run MY_PROJECT:deploy. Many third-party builders implement deployment capabilities to different platforms, and you can add them to your project with ng add [package name]. Once a package with deployment capability is added, your project’s angular.json file is automatically updated with a deploy section. You can then deploy your project by executing the ng deploy command. Currently, the deploy command supports deployment to Firebase, Azure, Zeit, Netlify, and GitHub. You can also create a builder yourself to use the ng deploy command in case you are deploying to a self-managed server or there is no builder for the cloud platform you are using.

Improved differential loading

Angular CLI 8.0 introduced the concept of differential loading to maximize the browser compatibility of your web application. Most modern browsers support ES2015, but there may be cases when your app’s users have a browser that doesn’t. To target a wide range of browsers, you can use polyfill scripts and ship a single bundle containing all your compiled code and any polyfills that may be needed. However, this increased bundle size shouldn’t penalize users who have modern browsers. This is where differential loading comes in: the CLI builds two separate bundles as part of your deployed application.
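When differential loading is enabled, the generated index.html references the two bundles so that each browser downloads only the one it understands. A rough illustration follows; the file names shown are illustrative of the CLI's usual output:

```html
<!-- Modern browsers load the ES2015 bundle; legacy browsers ignore type="module" -->
<script type="module" src="main-es2015.js"></script>
<!-- Legacy browsers load the ES5 bundle; modern browsers skip nomodule scripts -->
<script nomodule src="main-es5.js"></script>
```

Because the selection happens natively in the browser, no runtime feature detection code is needed.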
The first bundle targets modern browsers, while the second targets legacy browsers and includes all necessary polyfills. Though this increases your application’s browser compatibility, the production build ends up taking twice the time. Angular CLI 8.3.0 fixes this by changing how the command runs: the build targeting ES2015 is produced first and then directly downleveled to ES5, instead of rebuilding the app from scratch. If you encounter any issues, you can fall back to the previous behavior with NG_BUILD_DIFFERENTIAL_FULL=true ng build --prod.

Many Angular developers are excited about the new updates in Angular CLI 8.3.0.

https://twitter.com/vikerman/status/1164655906262409216
https://twitter.com/Santosh19742211/status/1164791877356277761

Some, however, questioned the usefulness of the deploy command. A developer on Reddit shared their perspective: “Honestly, I think Angular and the CLI are already big and complex enough. Every feature possibly creates bugs and needs to be maintained. While the CLI is incredibly useful and powerful there have been also many issues in the past. On the other hand, I must admit that I can't judge the usefulness of this feature: I've never used Firebase. Is it really so hard to deploy on it? Can't this be done with a couple of lines of a shell script? As already said: One should use CI/CD anyway.”

To know more about the new features in Angular CLI 8.3.0, check out the official docs. Also, check out the @angular-schule/ngx-deploy-starter repository to create a new builder for utilizing the deploy command.

Angular 8.0 releases with major updates to framework, Angular Material, and the CLI
Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0


Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations

Natasha Mathur
29 Nov 2018
4 min read
The Google Chrome team finally announced the release date for its Autoplay policy earlier this week. The policy had been delayed when it was released with the Chrome 66 stable release back in May this year. The latest policy change is scheduled to come out along with Chrome 71, in the upcoming month.

The Autoplay policy imposes restrictions that prevent videos and audio from autoplaying in the web browser. For websites that want to be able to autoplay their content, the new policy change will prevent playback by default. For most sites, playback will resume, but in other cases a small code adjustment will be required to resume the audio.

Additionally, Google has added a new approach to the policy that tracks users’ past behavior with sites that have autoplay enabled. So if a user regularly lets audio play for more than 7 seconds on a website, autoplay gets enabled for that website. This is done with the help of a “Media Engagement Index” (MEI), an index stored locally per Chrome profile on a device. The MEI tracks the number of visits to a site that include audio playback longer than 7 seconds. Each website gets a score between zero and one in the MEI, where higher scores indicate that the user doesn’t mind audio playing on that website.

For new user profiles, or if a user clears their browsing data, a pre-seeded list based on anonymized, aggregated MEI scores is used to determine which websites can autoplay. The pre-seeded site list is algorithmically generated, and only sites where enough users permit autoplay are added to the list.

“We believe by learning from the user – and anticipating their intention on a per website basis – we can create the best user experience. If users tend to let content play from a website, we will autoplay content from that site in the future.
Conversely, if users tend to stop autoplay content from a given website, we will prevent autoplay for that content by default,” mentions the Google team.

The reason behind the delay

The autoplay policy had been delayed by Google after receiving feedback from the Web Audio developer community, especially web game developers and WebRTC developers. As per the feedback, the autoplay change was affecting many web games and audio experiences, especially on sites that had not been updated for the change. Delaying the policy rollout gave web game developers enough time to update their websites. Moreover, Google also explored ways to reduce the negative impact of the audio play policy on websites with audio enabled. Following this, Google made an adjustment to its implementation of Web Audio to reduce the number of websites originally impacted.

New adjustments made for developers

As per Google’s new adjustments to the autoplay policy, audio will resume automatically when the user has interacted with a page and the start() method of a source node is called. A source node represents an individual audio snippet of the kind most games play: for example, the sound that plays when a player collects a coin, or the background music within a particular stage of a game. Game developers call the start() function on source nodes whenever any of these sounds are necessary for the game, so these changes will enable autoplay in most web games when the user starts playing the game.

The Google team has also introduced a mechanism that allows users to disable the autoplay policy for cases where the automatic learning doesn’t work as expected.

Along with the new autoplay policy update, Google will also stop showing existing annotations on YouTube videos to viewers starting from January 15, 2019. All existing annotations will be removed.
“We always put our users first but we also don’t want to let down the web development community. We believe that with our adjustments to the implementation of the policy, and the additional time we provided for web audio developers to update their code, we will achieve this balance with Chrome 71,” says the Google team.

For more information, check out Google’s official blog post.

“ChromeOS is ready for web development” – A talk by Dan Dascalescu at the Chrome Web Summit 2018
Day 1 of Chrome Dev Summit 2018: new announcements and Google’s initiative to close the gap between web and native
Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
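The per-site scoring that the Media Engagement Index performs, as described earlier in the article, can be illustrated with a toy model. This is a hedged sketch of the idea only: the 7-second threshold is the one stated above, while the ratio-based normalization is an assumption, since Chrome's actual formula is not given here.

```javascript
// Toy sketch of a Media Engagement Index (MEI)-style score.
// Assumption: score = significant playbacks / visits, which lands in [0, 1].
// "Significant" follows the article: audible playback longer than 7 seconds.
const SIGNIFICANT_PLAYBACK_SECONDS = 7;

function meiScore(visits) {
  if (visits.length === 0) return 0;
  const significant = visits.filter(
    (v) => v.audiblePlaybackSeconds > SIGNIFICANT_PLAYBACK_SECONDS
  ).length;
  return significant / visits.length; // between 0 and 1
}

// A site where the user usually lets audio play scores high:
const score = meiScore([
  { audiblePlaybackSeconds: 120 },
  { audiblePlaybackSeconds: 45 },
  { audiblePlaybackSeconds: 2 }, // user stopped playback quickly
]);
console.log(score.toFixed(2)); // → "0.67"
```

A high score for a site would then let that site autoplay, matching the behavior the Google team describes.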


Google’s V8 JavaScript engine adds support for top-level await

Fatema Patrawala
25 Sep 2019
3 min read
Yesterday, Joshua Litt from the Google Chromium team announced support for top-level await in V8.

V8 is Google’s open source high-performance JavaScript and WebAssembly engine, written in C++. It is used in Chrome and in Node.js, among others. It implements ECMAScript and WebAssembly, and runs on Windows 7 or later, macOS 10.12+, and Linux systems that use x64, IA-32, ARM, or MIPS processors. V8 can run standalone, or can be embedded into any C++ application.

The official documentation page on Google Chromium reads, “Adds support for parsing top level await to V8, as well as many tests. This is the final cl in the series to add support for top level await to v8.”

Top-level await support will ease running JS scripts in V8

As per the latest ECMAScript proposal, top-level await allows the await keyword to be used at the top level of the module goal. It enables modules to act as big async functions: with top-level await, ECMAScript modules (ESM) can await resources, causing other modules that import them to wait before they start evaluating their own bodies.

Earlier, developers used an IIFE (immediately invoked function expression) for top-level awaits: since await is only available within async functions, a module could include await in code that executes at startup only by factoring that code into an async function and invoking it immediately. This pattern has limitations, however. It is appropriate when loading a module is intended to schedule work that will happen some time later, but importing modules have no built-in way to wait for that work. Top-level await instead lets developers rely on the module system itself to handle this coordination and make sure that things are well-coordinated.

The community is really happy that top-level await support has been added to V8. On Hacker News, one user commented, “This is huge! Finally no more need to use IIFE's for top level awaits”.
Another user commented, “Top level await does more than remove a main function. If you import modules that use top level await, they will be resolved before the imports finish. To me this is most important in node where it's not uncommon to do async operations during initialization. Currently you either have to export a promise or an async function.”

To know more, read the official Google Chromium documentation page.

Other interesting news in web development

New memory usage optimizations implemented in V8 Lite can also benefit V8
LLVM WebAssembly backend will soon become Emscripten’s default backend, V8 announces
V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more
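The contrast between the IIFE workaround and top-level await can be sketched as follows. Since the top-level-await form only runs in an ES module context, it is shown here as the commented "after" variant; loadConfig is a hypothetical stand-in for a real async resource such as fetch():

```javascript
// Before: the async IIFE workaround. `await` is only legal inside an
// async function, so startup work is wrapped and immediately invoked.
async function loadConfig() {
  return { retries: 3 }; // stand-in for fetch('/config.json')
}

let config;
const ready = (async () => {
  config = await loadConfig();
})();
// Importers can't wait on `config` itself: they must know to await `ready`.

// After (inside an ES module, with top-level await support):
//   const config = await loadConfig();
//   export { config };
// Modules importing this one would wait automatically until `config` is set.

ready.then(() => console.log(config.retries)); // → 3
```

This is exactly the "export a promise or an async function" workaround the commenter above describes, which top-level await makes unnecessary.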


Giving material.angular.io a refresh from Angular Blog - Medium

Matthew Emerick
07 Oct 2020
3 min read
Hi everyone, I’m Annie and I recently joined the Angular Components team after finishing up my rotations as an Engineering Resident here at Google. During the first rotation of my residency I worked on the Closure Compiler and implemented some new ES2020 features, including nullish coalescing and optional chaining. After that, my second rotation project was with Angular Components, where I took on giving material.angular.io a long awaited face lift.

If you have recently visited the Angular Material documentation site you will have noticed some new visual updates. We’ve included new vibrant images on the components page, updates to the homepage, a guides page revamp, and so much more! Today I would like to highlight how we generated these fun colorful images.

We were inspired by the illustrations on the Material Design components page, which had aesthetic abstract designs that represented each component. We wanted to adapt the idea for material.angular.io but had some constraints and requirements to consider. First of all, we didn’t have a dedicated illustrator or designer for the project because of the tight deadline of my residency. Second of all, we wanted the images to be compact but clearly showcase each component and its usage. Finally, we wanted to be able to update these images easily when a component’s appearance changed. For the team the choice became clear: we were going to need to build something ourselves to meet these requirements.

While weighing our design options, we decided that we preferred a more realistic view of the components instead of abstract representations. This is where we came up with the idea of creating “scenes” for each component and capturing them as they would appear in use. We needed a way to efficiently capture these components, so we turned to a technique called screenshot testing. Screenshot testing captures an image of the page at a provided URL and compares it to an expected image.
Using this technique we were able to generate the scenes for all 35 components. Here’s how we did it:

  • Set up a route for each component that contains a “scene” using the actual Material component
  • Create an end-to-end testing environment and take screenshots of each route with Protractor
  • Save the screenshots instead of comparing them to an expected image
  • Load the screenshots from the site

One of the benefits of our approach is that whenever we update a component, we can just take new screenshots. This process saves incredible amounts of time and effort.

To create each of the scenes we held a mini hackathon to come up with fun ideas! For example, for the button component we wanted to showcase all the different types and styles of buttons available (icon, FAB, raised, etc.). For the button toggle component we wanted to show the toggle in both states in a realistic scenario where someone might use a button toggle.

Conclusion

It was really exciting to see the new site go live with all the changes we made and we hope you enjoy them too! Be sure to check out the site and let us know what your favorite part is! Happy coding, friends!

Giving material.angular.io a refresh was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.


Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70

Bhagyashree R
13 Aug 2019
4 min read
Last year, Apple removed the Extended Validation (EV) certificate indicators from Safari on both iOS 12 and Mojave. Now, Google and Mozilla are following suit by removing the EV visual indicators starting from Chrome 77 and Firefox 70.

What are Extended Validation certificates?

Introduced in 2007, Extended Validation certificates are issued to applicants after they are verified as a genuine legal entity by a certificate authority (CA). The baseline requirements for an EV certificate are outlined by the CA/Browser Forum. Web browsers show a green address bar when visiting a website that uses an EV certificate, with the company name displayed alongside the padlock symbol. These certificates can often be expensive: DigiCert charges $344 USD per year, Symantec prices its EV certificate at $995 USD a year, and Thawte at $299 USD a year.

Why Chrome and Firefox are removing EV indicators

In a survey conducted by Google, users of the Chrome and Safari browsers were asked how much they trusted a website with and without EV indicators. The results showed that browser identity indicators do not have much effect on users’ secure choices. About 85 percent of users did not find anything strange about a Google login page with the fake URL “accounts.google.com.amp.tinyurl.com”. Based on these results and prior academic work, the Chrome Security UX team concluded that positive security indicators are largely ineffective.

“As part of a series of data-driven changes to Chrome’s security indicators, the Chrome Security UX team is announcing a change to the Extended Validation (EV) certificate indicator on certain websites starting in Chrome 77,” the team wrote in a Google group. Another reason behind this decision was that the EV indicator takes up valuable screen space.
Starting with Chrome 77, the information related to EV certificates will be shown in the Page Info bubble that appears when the lock icon is clicked, instead of in an EV badge in the address bar.

Citing similar reasons, the team behind Firefox shared their intention to remove EV indicators from Firefox 70 for desktop yesterday. They also plan to move this information to the identity panel instead of showing it on the identity block. “The effectiveness of EV has been called into question numerous times over the last few years, there are serious doubts whether users notice the absence of positive security indicators and proof of concepts have been pitting EV against domains for phishing,” the team wrote.

Many CAs market EV certificates as something that builds visitor confidence and protects visitors against phishing and identity fraud. Looking at these developments, Troy Hunt, a web security expert and the creator of “Have I Been Pwned?”, concluded that EV certificates are now dead. In a blog post, he asked, “how long will it take the CAs selling EV to adjust their marketing to align with reality?”

Users have mixed feelings about this change. “Good riddance, IMO. They never meant much, to begin with, the validation procedures were basically "can you pay the fee?", and they only added to user confusion,” a user said on Hacker News. Many users believe that EV indicators are valuable for financial transactions. A user commented on Reddit, “As a financial institution it was always much easier to just say "make sure it says <Bank name> in the URL bar and it's green" when having a customer on the phone than "Please click on settings -> advanced settings -> security -> display certificate and check the value subject".”

To know more, check out the official announcements by the Chrome and Firefox teams.
Google Chrome to simplify URLs by hiding special-case subdomains
Flutter gets new set of lint rules to build better Chrome OS apps
Mozilla releases WebThings Gateway 0.9 experimental builds targeting Turris Omnia and Raspberry Pi 4


Ionic React released; Ionic Framework pivots from Angular to a native React version

Sugandha Lahoti
15 Oct 2019
3 min read
Yesterday the team behind the Ionic Framework announced the general availability of Ionic React, a native React version of Ionic Framework, pivoting from its traditionally Angular-focused app framework. “Ionic React makes it easy to build apps for iOS, Android, Desktop, and the web as a Progressive Web App,” states the team in a blog post. It uses TypeScript and combines the core Ionic experience with tooling and APIs that are tailored to developers. It is a fully-supported, enterprise-ready offering with services, advisory, tooling, and supported native functionality.

@ionic/react projects work like standard React projects, leveraging react-dom and the setup normally found in a Create React App (CRA) app. For routing and navigation, React Router is used under the hood. One difference is the use of TypeScript, which provides a more productive experience. To use plain JavaScript, you can rename files to use a .js extension and then remove the type annotations from each file.

Explaining the reason behind choosing React, the team says, “With Ionic, we envisioned being able to build rich JavaScript-powered controls and distribute them as simple HTML tags any web developer could assemble into an awesome app. We realized that building a version of the Ionic Framework for React made perfect sense. Combined with the fact that we had several React fans join the Ionic team over the years, there was a strong desire internally to see Ionic Framework support React as well.”

How is Ionic React different from React Native?

The team realized that there was a gap in the React ecosystem that Ionic could fill as an easier mobile and Progressive Web App development solution. Developers were also interested in incorporating it into their existing React Native apps by building more screens of their app inside a native WebView frame. There were two major reasons why the Ionic team built @ionic/react. First, it is DOM-native and uses the standard react-dom library.
In contrast, React Native builds an abstraction on top of iOS and Android native UI controls. The team states, “When we looked at installs for react-dom compared to react-native, it was clear to us that vastly more React development was happening in the browser and on top of the DOM than on top of the native iOS or Android UI systems.”

Secondly, Ionic is one of the most popular frameworks for building PWAs, most notably through the Stencil project. React Native, on the other hand, does not officially support Progressive Web Apps; PWAs are, at best, an afterthought in the React Native ecosystem.

@ionic/react has been well appreciated by developers on Twitter.

https://twitter.com/dipakcreation/status/1183974237125693441
https://twitter.com/MichaelW_PWC/status/1183836080170323968
https://twitter.com/planetoftheweb/status/1183809368934043653

You can go through Ionic’s blog for additional information and for getting started.

Ionic React RC is now out!
Ionic 4.1 named Hydrogen is out!
React Native Vs Ionic: Which one is the better mobile app development framework?

Safari Technology Preview 91 gets beta support for the WebGPU JavaScript API and WSL

Bhagyashree R
13 Sep 2019
3 min read
Yesterday, Apple announced that Safari Technology Preview 91 now supports the beta version of the new WebGPU graphics API and its shading language, Web Shading Language (WSL). You can enable the WebGPU beta support by selecting Experimental Features > WebGPU in the Developer menu.

The WebGPU JavaScript API

WebGPU is a new graphics API for the web that aims to provide “modern 3D graphics and computation capabilities.” It is a successor to WebGL, a JavaScript API that enables 3D and 2D graphics rendering within any compatible browser without the need for a plug-in. It is being developed in the W3C GPU for the Web Community Group with engineers from Apple, Mozilla, Microsoft, Google, and others.

Read also: WebGL 2.0: What you need to know

Comparing WebGPU and WebGL

WebGPU differs from WebGL in that it is not a direct port of any existing native API, though both are accessed through JavaScript. The team does have plans to make WebGPU accessible through WebAssembly as well in the future. In WebGL, rendering a single object requires writing a series of state-changing calls. WebGPU, on the other hand, combines all the state-changing calls into a single object named the pipeline state object. It validates the state after the pipeline is created, preventing expensive state analysis inside the draw call. Also, wrapping an entire pipeline state in a single function call reduces the number of exchanges between JavaScript and WebKit’s C++ browser engine. Similarly, resources in WebGL are bound one-by-one, while WebGPU batches them up into bind groups. The team explains, “In both of these examples, multiple objects are gathered up together and baked into a hardware-dependent format, which is when the browser performs validation.
Being able to separate object validation from object use means the application author has more control over when expensive operations occur in the lifecycle of their application.”

The main focus of WebGPU is to provide improved performance and ease of use as compared to WebGL. The team compared the performance of the two using the 2D graphics benchmark MotionMark. The performance test they wrote measured how many triangles, each with different properties, could be rendered while maintaining 60 frames per second. Each triangle was rendered with a different draw call and bind group. WebGPU showed substantially better performance than WebGL.

WHLSL is now renamed to WSL

In November last year, Apple proposed a new shading language for WebGPU named Web High-Level Shading Language (WHLSL), which was source-compatible with HLSL. After receiving community feedback, they updated the language to be compatible with the OpenGL Shading Language (GLSL), a language commonly used among web developers. Apple renamed this version of the language to Web Shading Language (WSL) and describes it as “simple, low-level, and fast to compile.”

Read also: Introducing Web High Level Shading Language (WHLSL): A graphics shading language for WebGPU

“There are many Web developers using GLSL today in WebGL, so a potential browser accepting a different high-level language, like HLSL, wouldn’t suit their needs well. In addition, a high-level language such as HLSL can’t be executed faithfully on every platform and graphics API that WebGPU is designed to execute on,” the team wrote.

Check out the official announcement by Apple to know more in detail.

Other news in web

Firefox 69 allows default blocking of third-party tracking cookies and cryptomining for all users
New memory usage optimizations implemented in V8 Lite can also benefit V8
Laravel 6.0 releases with Laravel vapor compatibility, LazyCollection, improved authorization response and more
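The difference between per-draw state calls and a prebuilt, pre-validated pipeline object, as described above, can be illustrated with a toy model. This is not the real WebGPU or WebGL API; it is just a sketch of why validating once at pipeline creation beats re-validating on every draw call:

```javascript
// WebGL-style: every draw re-validates each piece of state.
function drawWithStateCalls(gpu, state) {
  gpu.validate(state.shader); // repeated on every draw
  gpu.validate(state.blendMode);
  gpu.validate(state.vertexLayout);
  gpu.draw();
}

// WebGPU-style: bake the state into one pipeline object, validated once.
function createPipeline(gpu, state) {
  gpu.validate(state.shader);
  gpu.validate(state.blendMode);
  gpu.validate(state.vertexLayout);
  return { state, validated: true };
}
function drawWithPipeline(gpu, pipeline) {
  if (!pipeline.validated) throw new Error('pipeline must be created first');
  gpu.draw(); // no per-draw validation needed
}

// Count the validation work for 1,000 draws under each model.
let checks = 0;
const gpu = { validate: () => { checks += 1; }, draw: () => {} };
const state = { shader: 's', blendMode: 'b', vertexLayout: 'v' };

for (let i = 0; i < 1000; i++) drawWithStateCalls(gpu, state);
const perDrawChecks = checks;

checks = 0;
const pipeline = createPipeline(gpu, state);
for (let i = 0; i < 1000; i++) drawWithPipeline(gpu, pipeline);
console.log(perDrawChecks, checks); // → 3000 3
```

The same shape of saving applies to bind groups: resources are validated when the group is baked, not each time they are bound.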


Amazon Transcribe Streaming announces support for WebSockets

Savia Lobo
29 Jul 2019
3 min read
Last week, Amazon announced that its automatic speech recognition (ASR) service, Amazon Transcribe, now supports WebSockets. According to Amazon, “WebSocket support opens Amazon Transcribe Streaming up to a wider audience and makes integrations easier for customers that might have existing WebSocket-based integrations or knowledge.”

Amazon Transcribe allows developers to easily add speech-to-text capability to their applications. Amazon announced the general availability of Amazon Transcribe at the AWS San Francisco Summit 2018. With the Amazon Transcribe API, users can analyze audio files stored in Amazon S3 and have the service return a text file of the transcribed speech. Real-time transcripts from a live audio stream are also possible with the Transcribe API.

Until now, the Amazon Transcribe Streaming API has been available using HTTP/2 streaming. The new WebSocket support adds another integration option for bringing real-time voice capabilities to projects built with Transcribe.

What are WebSockets?

WebSockets are a protocol built atop TCP, similar to HTTP. HTTP is excellent for short-lived requests but does not handle persistent real-time communications well. Because of this, the first Amazon Transcribe Streaming API made available used HTTP/2 streams, which solve many of the issues HTTP had with real-time communications. As Amazon states, while “an HTTP connection is normally closed at the end of the message, a WebSocket connection remains open.” With this advantage, messages can be sent bi-directionally with no bandwidth or latency added by handshaking and negotiating a connection. WebSocket connections are full-duplex, meaning that the server and client can both transmit data at the same time. WebSockets were also designed “for cross-domain usage, so there’s no messing around with cross-origin resource sharing (CORS) as there is with HTTP.”
Amazon Transcribe Streaming using WebSockets

While using the WebSocket protocol to stream audio, Amazon Transcribe transcribes the stream in real time. When a user encodes the audio with event stream encoding, Amazon Transcribe responds with a JSON structure that is also encoded using event stream encoding. The key components of a WebSocket request to Amazon Transcribe are:

  • Creating a pre-signed URL to access Amazon Transcribe
  • Creating binary WebSocket frames containing event-stream-encoded audio data
  • Handling WebSocket frames in the response

The languages that Amazon Transcribe currently supports during real-time transcription are British English (en-GB), US English (en-US), French (fr-FR), Canadian French (fr-CA), and US Spanish (es-US).

To know more about the WebSockets API in detail, visit Amazon’s official post.

Understanding WebSockets and Server-sent Events in Detail
Implementing a non-blocking cross-service communication with WebClient [Tutorial]
Introducing Kweb: A Kotlin library for building rich web applications
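The three key components listed in the article can be sketched in outline. This is a hedged illustration, not AWS sample code: the pre-signed URL creation (SigV4 signing) is elided and assumed given, WebSocketImpl is injected so the sketch works with the browser WebSocket or a library such as ws, and the event stream codecs are placeholders for the real binary format.

```javascript
// Sketch of streaming audio to a transcription WebSocket endpoint.
function streamAudio(WebSocketImpl, presignedUrl, audioChunks, onTranscript) {
  // Step 1 (elided): `presignedUrl` is assumed to be a pre-signed URL.
  const ws = new WebSocketImpl(presignedUrl);
  ws.binaryType = 'arraybuffer';

  ws.onopen = () => {
    for (const chunk of audioChunks) {
      // Step 2: wrap each raw audio chunk in a binary event-stream frame.
      ws.send(encodeEventStreamFrame(chunk));
    }
  };
  // Step 3: responses are event-stream-encoded JSON transcripts.
  ws.onmessage = (event) => onTranscript(decodeEventStreamFrame(event.data));
  return ws;
}

// Placeholder codecs: the real event stream encoding is a binary format
// with headers and checksums, not shown here.
function encodeEventStreamFrame(chunk) { return chunk; }
function decodeEventStreamFrame(data) { return data; }
```

Because the connection stays open and full-duplex, transcripts arrive through onmessage while audio is still being sent.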


Introducing Wasmjit: A kernel mode WebAssembly runtime for Linux

Bhagyashree R
26 Sep 2018
2 min read
Written in C90, Wasmjit is a small embeddable WebAssembly runtime. It is portable to most environments, but it primarily targets a Linux kernel module that can host Emscripten-generated WebAssembly modules.

What are the benefits of Wasmjit?

Improved performance: Using Wasmjit you can run WebAssembly modules in kernel space (ring 0). This provides access to system calls as normal function calls, eliminating the user-kernel transition overhead, and also avoids the scheduling overhead of swapping page tables. This gives a performance boost to syscall-bound programs like web servers or FUSE file systems.

No need to run an entire browser: Wasmjit also comes with a host environment for running in user space on POSIX systems. This allows running WebAssembly modules without having to run an entire browser.

What tools do you need to get started?

The following are the tools you require to get started with Wasmjit:

  • A standard POSIX C development environment with cc and make
  • The Emscripten SDK
  • Optionally, kernel headers on Linux: the linux-headers-amd64 package on Debian, or kernel-devel on Fedora

What’s in the future?

Wasmjit currently supports x86_64 and can run a subset of Emscripten-generated WebAssembly on Linux, macOS, and within the Linux kernel as a kernel module. Coming releases are planned to bring more implementations and improvements along the following lines:

  • Enough Emscripten host bindings to run nginx.wasm
  • Introduction of an interpreter
  • A Rust runtime for Rust-generated wasm files
  • A Go runtime for Go-generated wasm files
  • An optimized x86_64 JIT
  • An arm64 JIT
  • A macOS kernel module

What to consider when using this runtime?

Wasmjit uses vmalloc(), a function for allocating a contiguous memory region in the virtual address space, for code and data section allocations. This prevents those pages from ever being swapped to disk, so indiscriminate access to the /dev/wasm device can make a system vulnerable to denial-of-service attacks.
To mitigate this risk, in future, a system-wide limit on the amount of memory used by the /dev/wasm device will be provided. To get started with Wasmjit, check out its GitHub repository. Why is everyone going crazy over WebAssembly? Unity Benchmark report approves WebAssembly load times and performance in popular web browsers Golang 1.11 is here with modules and experimental WebAssembly port among other updates
Introducing Stateful Functions, an OS framework to easily build and orchestrate distributed stateful apps, by Apache Flink’s Ververica

Vincy Davis
19 Oct 2019
3 min read
Last week, Apache Flink's stream processing company Ververica announced the launch of Stateful Functions, an open source framework developed to reduce the complexity of building and orchestrating distributed stateful applications. It aims to bring together the benefits of stream processing with Apache Flink and Function-as-a-Service (FaaS). Ververica will propose the project, licensed under Apache 2.0, to the Apache Flink community as an open source contribution.

Read More: Apache Flink 1.9.0 releases with Fine-grained batch recovery, State Processor API and more

The co-founder and CTO at Ververica, Stephan Ewen, says, "Orchestration for stateless compute has come a long way, driven by technologies like Kubernetes and FaaS — but most offerings still fall short for stateful distributed applications." He further adds, "Stateful Functions is a big step towards addressing those shortcomings, bringing the seamless state management and consistency from modern stream processing to space."

Stateful Functions is designed as a simple and powerful abstraction based on functions that can interact with each other asynchronously and be composed into complex networks of functionality. This approach eliminates the need for additional infrastructure for application state management, and it reduces both operational overhead and overall system complexity. Stateful functions are meant to help users define independent functions with a small footprint, enabling the resources to interact reliably with each other. Each function has persistent, user-defined state in local variables and can arbitrarily message other functions.

The Stateful Functions framework simplifies use cases such as:

Asynchronous application processes (checkout, payment, logistics)
Heterogeneous, load-varying event stream pipelines (IoT event rule pipelines)
Real-time context and statistics (ML feature assembly, recommenders)

The runtime of the Stateful Functions API is based on the stream processing capability of Apache Flink and extends Flink's powerful model for state management and fault tolerance. The major advantage of this framework is that state and computation are co-located on the same side of the network. This means that "you don't need the round trip per record to fetch state from an external storage system, nor a specific state management pattern for consistency."

Though the Stateful Functions API is independent of Flink, its runtime is built on top of Flink's DataStream API and uses a lightweight version of process functions. "The core advantage here, compared to vanilla Flink, is that functions can arbitrarily send events to all other functions, rather than only downstream in a DAG," stated the official blog.

Image source: Ververica blog

As shown in the above figure, a Stateful Functions application consists of multiple bundles of functions that are multiplexed into a single Flink application, enabling them to interact consistently and reliably with each other. Many small jobs can thus share the same pool of resources and draw on them as needed.

Many Twitterati are excited about this announcement.

https://twitter.com/sijieg/status/1181518992541933568
https://twitter.com/PasqualeVazzana/status/1182033530269949952
https://twitter.com/acmurthy/status/1181574696451620865

Head over to the Stateful Functions website to know more details.
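The function-messaging abstraction described above can be sketched in a few lines of ordinary TypeScript. This is an illustrative analogy only, not the actual Stateful Functions API: the Runtime class and its register/send/getState methods are invented here to show the idea of small functions with persistent local state that message each other.

```typescript
// Illustrative analogy only -- not the real Stateful Functions API.
type Address = string;
type Message = { to: Address; payload: number };
type Fn = (msg: Message, ctx: Runtime) => void;

class Runtime {
  private fns = new Map<Address, Fn>();
  private state = new Map<Address, number>();

  register(addr: Address, fn: Fn): void {
    this.fns.set(addr, fn);
  }
  send(msg: Message): void {
    this.fns.get(msg.to)?.(msg, this); // deliver to the addressed function
  }
  getState(addr: Address): number {
    return this.state.get(addr) ?? 0;
  }
  setState(addr: Address, value: number): void {
    this.state.set(addr, value);
  }
}

const rt = new Runtime();

// "counter" keeps persistent local state and messages "logger" with totals.
rt.register("counter", (msg, ctx) => {
  const total = ctx.getState("counter") + msg.payload;
  ctx.setState("counter", total);
  ctx.send({ to: "logger", payload: total });
});
rt.register("logger", (msg) => console.log("running total:", msg.payload));

rt.send({ to: "counter", payload: 5 });
rt.send({ to: "counter", payload: 3 }); // logs "running total: 8"
```

In the real framework the runtime, not the sketch above, guarantees exactly-once state updates and fault tolerance via Flink; the point here is only the shape of the abstraction: addressable functions, co-located state, and arbitrary messaging.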
OpenBSD 6.6 comes with GCC disabled in base for ARMv7 and i386, SMP Improvements, and more
Developers ask for an option to disable Docker Compose from automatically reading the .env file
Ubuntu 19.10 releases with MicroK8s add-ons, GNOME 3.34, ZFS on root, NVIDIA-specific improvements, and much more!
Swift shares diagnostic architecture improvements that will be part of the Swift 5.2 release
Microsoft launches Open Application Model (OAM) and Dapr to ease developments in Kubernetes and microservices
Firefox 71 released with new developer tools features

Savia Lobo
04 Dec 2019
5 min read
Yesterday, the Firefox team announced its latest version, Firefox 71. This version includes a plethora of new developer tools features such as the WebSocket Message Inspector, a console multi-line editor mode, log on events, and full-text search in the Network panel. Many of these features were first made available in the Firefox Developer Edition and later improved based on feedback. Other highlights in Firefox 71 include new web platform features such as CSS subgrid, column-span, Promise.allSettled, and the Media Session API.

What's new in Firefox 71?

Improvements in speed and reliability

In Firefox 71, the team took some help from the JavaScript team by improving the caching of scripts during startup. This made both Firefox and DevTools start faster. "One Console test got an astonishing 40% improvement while times across every panel were boosted by 8-15%," the official blog post mentions. Also, links to scripts, for example from the event handler tooltip in the Inspector or the stack traces in the Console, now reliably get you to the expected line, and debugging sources loaded through eval() also works as expected.

WebSocket Message Inspector

In Firefox 71, the Network panel has a new Messages tab where you can observe all messages sent and received through a WebSocket connection.

Source: Mozilla Hacks

Sent frames have a green up-arrow icon, while received frames have a red down-arrow icon. You can click on an individual frame to view its formatted data. Know more about the WebSocket Message Inspector in the official post.

Console multi-line editor mode

Another developer tools feature in Firefox 71 is the new multi-line console. It combines the benefits of an IDE for authoring code with the workflow of repeatedly executing code in the context of the page. If you open the regular console, you'll see a new icon at the end of the prompt row.

Source: Mozilla Hacks

Clicking this will switch the console to multi-line mode:

Source: Mozilla Hacks

Here you can enter multiple lines of code, pressing Enter after each one, and then run the code using Ctrl + Enter. You can also move between statements using the next and previous arrows. The editor includes the regular IDE features you'd expect, such as open/close bracket pair highlighting and automatic indentation.

Inline variable preview in Debugger

The JavaScript Debugger now provides inline variable previews, a useful timesaver when stepping through your code. In previous versions, you had to scroll through the Scopes panel to find variable values, or hover over a variable in the source pane. Now, when execution pauses, you can view relevant variable and property values directly in the source.

Source: Mozilla Hacks

Using Babel-powered source mapping, the preview also works for variables that have been renamed or minified by build steps. Make sure to enable this power feature by checking Map in the Scopes pane.

Log on Event Listeners

There have been a few updates to event listener breakpoints in Firefox 71. Log on events lets you explore which event handlers are being fired in which order, without the need for pausing and stepping. If we choose to log keyboard events, for example, the code no longer pauses as each event is fired:

Source: Mozilla Hacks

Instead, we can switch to the console, and whenever we press a key we are given a log of where related events were fired.

CSS improvements

New CSS features in Firefox 71 include subgrid, multicol column-span, clip-path: path(), and aspect ratio mapping.

Subgrid

Enabled in 71 after being supported behind a pref for a while, the subgrid value of grid-template-columns and grid-template-rows allows you to create a nested grid inside a grid item that will use the main grid's tracks. This means that grid items inside the subgrid will line up with the parent's grid tracks, making various layout techniques much easier.

Multicol — column-span

CSS multicol support has moved forward in a big way with the inclusion of the column-span property in Firefox 71. It allows you to make an element span across all the columns in a multicol container (generated using column-width or column-count).

clip-path: path()

The path() value of the clip-path property is now enabled by default. It allows you to create a custom mask shape using a path() function, as opposed to a predefined shape like a circle or ellipse.

Aspect ratio mapping

Finally, the height and width HTML attributes on the <img> element are now mapped to an internal aspect-ratio property. This allows the browser to calculate the image's aspect ratio early on and correct its display size before it has loaded, if CSS has been applied that causes problems with the display size.

There are also a few minor JavaScript changes in this release, including Promise.allSettled(), the Media Session API, and WebGL multiview.

A lot of users are excited about this release and are looking forward to trying it out.

https://twitter.com/IshSookun/status/1201897724943036417
https://twitter.com/awhite/status/1202163413021077504

To know more about this news in detail, read the Firefox 71 official announcement.

The new WebSocket Inspector will be released in Firefox 71
Firefox 70 released with better security, CSS, and JavaScript improvements
Google and Mozilla to remove Extended Validation indicators in Chrome 77 and Firefox 70
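Of the JavaScript additions mentioned in the article, Promise.allSettled() is the easiest to demonstrate: unlike Promise.all(), it waits for every promise to settle and never short-circuits on the first rejection. A minimal sketch:

```typescript
// Promise.allSettled() resolves once every input promise has settled,
// reporting each outcome instead of rejecting on the first failure.
async function demo(): Promise<void> {
  const results = await Promise.allSettled([
    Promise.resolve(42),
    Promise.reject(new Error("boom")),
  ]);

  for (const r of results) {
    if (r.status === "fulfilled") {
      console.log("ok:", r.value);
    } else {
      console.log("failed:", r.reason.message);
    }
  }
}

demo();
```

This is handy when several independent requests should all be attempted even if some fail, e.g. fetching optional resources for a page.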
HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more

Vincy Davis
17 Jun 2019
6 min read
Last week, HAProxy 2.0 was released with critical features for cloud-native and containerized environments. This is an LTS (long-term support) release, which includes a powerful set of core features such as Layer 7 retries, cloud-native threading and logging, polyglot extensibility, gRPC support and more, and improves seamless integration into modern architectures. In conjunction with this release, the HAProxy team has also introduced the HAProxy Kubernetes Ingress Controller and the HAProxy Data Plane API. The founder of HAProxy Technologies, Willy Tarreau, has said that these developments will come with the HAProxy 2.1 version. The HAProxy project has also opened up issue submissions on its HAProxy GitHub account.

Some features of HAProxy 2.0

Cloud-Native Threading and Logging

HAProxy can now scale to accommodate any environment with less manual configuration. The number of worker threads now matches the machine's number of available CPU cores. The process setting is no longer required, thus simplifying the bind line. Two new build parameters have been added, MAX_THREADS and MAX_PROCS, which avoid allocating huge structs. Logging has been made easier for containerized environments: direct logging to stdout and stderr, or to a file descriptor, is now possible.

Kubernetes Ingress Controller

The HAProxy Kubernetes Ingress Controller provides high-performance ingress for Kubernetes-hosted applications. It supports TLS offloading, Layer 7 routing, rate limiting, and whitelisting. Ingresses can be configured through either ConfigMap resources or annotations.

The Ingress Controller gives users the ability to:

Use only one IP address and port and direct requests to the correct pod based on the Host header and request path
Secure communication with built-in SSL termination
Apply rate limits for clients while optionally whitelisting IP addresses
Select from among any of HAProxy's load-balancing algorithms
Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics
Set maximum connection limits to backend servers to prevent overloading services

Layer 7 Retries

With HAProxy 2.0, it is possible to retry failed HTTP requests from another server at Layer 7. The new configuration directive, retry-on, can be used in a defaults, listen, or backend section. The number of attempts at retrying can be specified using the retries directive. The full list of retry-on options is given on the HAProxy blog.

HAProxy 2.0 also introduces a new http-request action called disable-l7-retry. It allows you to disable any attempt to retry a request if it fails for any reason other than a connection failure. This can be useful to make sure that POST requests aren't retried.

Polyglot Extensibility

The Stream Processing Offload Engine (SPOE) and Stream Processing Offload Protocol (SPOP) were introduced in HAProxy 1.7, aiming to create the extension points necessary to build upon HAProxy using any programming language. From HAProxy 2.0, libraries and examples are available for the following languages and platforms:

C
.NET Core
Golang
Lua
Python

gRPC

HAProxy 2.0 delivers full support for the open-source RPC framework gRPC. This allows bidirectional streaming of data, detection of gRPC messages, and logging of gRPC traffic. Two new converters, protobuf and ungrpc, have been introduced to extract the raw Protocol Buffer messages. Using Protocol Buffers, gRPC enables users to serialize messages into a binary format that's compact and potentially more efficient than JSON.
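Pulling the Layer 7 retry pieces described earlier together, a backend section might look roughly like the following sketch. Server names and addresses are placeholders; retry-on accepts many more options (connection failures, specific status codes, and so on), all listed on the HAProxy blog.

```
backend webservers
    mode http
    retries 3
    retry-on all-retryable-errors
    # don't retry POSTs, since they may not be idempotent
    http-request disable-l7-retry if METH_POST
    server web1 192.168.1.10:80 check
    server web2 192.168.1.11:80 check
```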
To start using gRPC in HAProxy, users need to set up a standard end-to-end HTTP/2 configuration.

HTTP Representation (HTX)

The Native HTTP Representation (HTX) was introduced with HAProxy 1.9. Starting from 2.0, it is enabled by default. HTX creates strongly typed, well-delineated header fields and allows for gaps and out-of-order fields. It also allows HAProxy to maintain consistent semantics from end to end and provides higher performance when translating HTTP/2 to HTTP/1.1 or vice versa.

LTS Support for 1.9 Features

HAProxy 2.0 brings LTS support for many features that were introduced or improved upon during the 1.9 release. Some of them are specified below:

Small Object Cache with a caching size increased up to 2GB, set with the max-object-size directive. The total-max-size setting determines the total size of the cache and can be increased up to 4095MB.
New fetches like date_us, cpu_calls and more, which report either an internal state or information from layers 4, 5, 6, and 7.
New converters like strcmp, concat and more, which allow you to transform data within HAProxy.
Server Queue Priority Control, which lets users prioritize some queued connections over others. This is helpful to deliver JavaScript or CSS files before images.
The resolvers section supports using resolv.conf by specifying parse-resolv-conf.

The HAProxy team plans to build HAProxy 2.1 with features like UDP support, OpenTracing, and dynamic SSL certificate updates. The inaugural HAProxy community conference, HAProxyConf, is scheduled to take place in Amsterdam, Netherlands on November 12-13, 2019.

A user on Hacker News comments, "HAProxy is probably the best proxy server I had to deal with ever. It's performance is exceptional, it does not interfere with L7 data unless you tell it to and it's extremely straightforward to configure reading the manual."

Meanwhile, some are busy comparing HAProxy with the nginx web server. A user says, "In my previous company we used to use HAProxy, and it was a hassle. Yes, it is powerful. However, nginx is way easier to configure and set up, and performance wise is a contender for most usual applications people needed. nginx just fulfills most people's requirements for reverse proxy and has solid HTTP/2 support (and other features) for way longer."

Another user states, "Big difference is that haproxy did not used to support ssl without using something external like stunnel -- nginx basically did it all out of the box and I haven't had a need for haproxy in quite some time now."

Others suggest that HAProxy is trying hard to stay equipped with the latest features.

https://twitter.com/garthk/status/1140366975819849728

A user on Hacker News agrees, saying, "These days I think HAProxy and nginx have grown a lot closer together on capabilities."

Visit the HAProxy blog for more details about HAProxy 2.0.

HAProxy introduces stick tables for server persistence, threat detection, and collecting metrics
MariaDB announces the release of MariaDB Enterprise Server 10.4
Businesses need to learn how to manage cloud costs to get real value from serverless and machine learning-as-a-service
Ghost 3.0, an open-source headless Node.js CMS, released with JAMStack integration, GitHub Actions, and more!

Savia Lobo
23 Oct 2019
4 min read
Yesterday, the team behind Ghost, an open-source headless Node.js CMS, announced its major new version, Ghost 3.0. The new version represents "a total of more than 15,000 commits across almost 300 releases." Ghost is now used by the likes of Apple, DuckDuckGo, OpenAI, The Stanford Review, Mozilla, Cloudflare, Digital Ocean, and many others. "To date, Ghost has made $5,000,000 in customer revenue whilst maintaining complete independence and giving away 0% of the business," the official website highlights.

https://twitter.com/Ghost/status/1186613938697338881

What's new in Ghost 3.0?

Ghost on the JAMStack

The team has revamped the traditional architecture using the JAMStack approach, which makes Ghost a completely decoupled headless CMS. Users can generate a static site and later add dynamic features to make it more powerful. The new architecture unlocks content management that is fundamentally built via APIs, webhooks, and frameworks to generate robust modern websites.

Continuous theme deployments with GitHub Actions

The process of manually making a zip, navigating to Ghost Admin, and uploading an update in the browser can be tedious. To deploy Ghost themes to production in a better way, the team decided to integrate with GitHub Actions. This makes it easy to continuously sync custom Ghost themes to live production sites with every new commit.

New WordPress migration plugin

Earlier versions of Ghost included a very basic WordPress migrator plugin that made it extremely difficult to move data between the platforms or have a smooth experience. The new Ghost 3.0 compatible WordPress migration plugin provides a single-button download of the full WordPress content + image archive in a format that can be dragged and dropped into Ghost's importer.

Those who want to explore Ghost 3.0 can create a new site in a few clicks with an unrestricted 14-day free trial, as all new sites on Ghost (Pro) are running Ghost 3.0.
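As a sketch of what "headless" means in practice: a static-site build pulls content over Ghost's public Content API instead of letting Ghost render HTML. The helper below, the demo site URL, and the API key are all placeholders for illustration; the endpoint shape follows the v3-era Content API paths.

```typescript
// Build a Ghost Content API URL. A static-site generator (Gatsby, Eleventy,
// etc.) would fetch endpoints like this at build time. The site URL and
// API key here are placeholders, not real credentials.
function contentApiUrl(siteUrl: string, resource: string, apiKey: string): string {
  return `${siteUrl}/ghost/api/v3/content/${resource}/?key=${apiKey}`;
}

const postsUrl = contentApiUrl("https://demo.example.com", "posts", "CONTENT_API_KEY");
console.log(postsUrl);
// => https://demo.example.com/ghost/api/v3/content/posts/?key=CONTENT_API_KEY

// At build time, a generator might then do (sketch):
// const { posts } = await (await fetch(postsUrl)).json();
```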
The team encourages users to try out Ghost 3.0 and share feedback on the Ghost forum, or to help out on GitHub with building the next features alongside the Ghost team.

Ben Thompson's Stratechery, a subscription-based newsletter featuring in-depth commentary on tech and media news, recently posted an interview with Ghost CEO John O'Nolan. The interview covers what Ghost is, where it came from, and much more.

Ghost 3.0 has received a positive response, not least for its move towards the static-site JAMStack approach. A user on Hacker News commented, "In my experience, Ghost has been the no-nonsense blog CMS that has been stable and just worked with very little maintenance. I like that they are now moving towards static site JAMStack approach, driven by APIs rather than the current SSR model. This lets anybody to customise their themes with the language / framework of choice and generating static builds that can be cached for improved loading times."

Another user, trying Ghost for the first time, commented, "I've never tried Ghost, although their website always appealed to me (one of the best designed website I know). I've been using WordPress for the past 13 years, for personal and also professional projects, which means the familiarity I've built with building custom themes never drew me towards trying another CMS. But going through this blog post announcement, I saw that Ghost can be used as a headless CMS with frontend frameworks. And since I started using GatsbyJS extensively in the past year, it seems like something that would work _really_ well together. Gonna try it out! And congrats on remaining true to your initial philosophy."

To know more about the other features in detail, read the official blog post.
Google partners with WordPress and invests $1.2 million on "an opinionated CMS" called Newspack
Verizon sells Tumblr to WordPress parent, Automattic, for allegedly less than $3million, a fraction of its acquisition cost
FaunaDB brings its serverless database to Netlify to help developers create apps
Typescript 2.9 release candidate is here

Sugandha Lahoti
21 May 2018
2 min read
The release candidate for TypeScript 2.9 is here! TypeScript is an open-source programming language that adds optional static typing to JavaScript. Let's jump into some highlights of the TypeScript 2.9 RC.

Changes to the keyof operator

TypeScript 2.9 changes the behavior of keyof to factor in both unique symbols and numeric literal types. TypeScript's keyof operator is a useful way to query the property names of an existing type. Before TypeScript 2.9, keyof never recognized symbolic keys. With this change, mapped object types like Partial, Required, or Readonly also recognize symbolic and numeric property keys, and no longer drop properties named by symbols.

Introduction of the new import() type syntax

One long-running pain point in TypeScript has been the inability to reference a type in another module, or the type of the module itself, without including an import at the top of the file. TypeScript 2.9 introduces a new import(...) type syntax. Import types use the same syntax as ECMAScript's proposed import(...) expressions, and provide a convenient way to reference the type of a module, or the types a module contains.

Trailing commas not allowed on rest parameters

This break was added for conformance with ECMAScript, as trailing commas are not allowed to follow rest parameters in the specification.

Changes to strictNullChecks

Unconstrained type parameters are no longer assignable to object under strictNullChecks. Since generic type parameters can be substituted with any primitive type, this is a precaution TypeScript has added under strictNullChecks. To fix this, you can add a constraint of object.

never can no longer be iterated over

Values of type never can no longer be iterated over. You can avoid this behavior by using a type assertion to cast to the type any.

The full list of changes and code samples can be found on the Microsoft blog. You can also view the TypeScript roadmap for everything else that's coming in 2.9 and beyond.
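A small sketch of the new keyof behavior described above. The Widget interface and the serialize symbol are invented for illustration; the point is that a symbol-named member now shows up in keyof, so generic property access works for it.

```typescript
// With TypeScript 2.9, keyof also includes unique symbol (and numeric
// literal) keys, so generic property access works for symbol-named members.
const serialize = Symbol("serialize");

interface Widget {
  id: number;
  [serialize]: () => string;
}

// keyof Widget is now "id" | typeof serialize (before 2.9 it was just "id").
function getProp<T, K extends keyof T>(obj: T, key: K): T[K] {
  return obj[key];
}

const w: Widget = {
  id: 1,
  [serialize]: () => "widget#1",
};

console.log(getProp(w, "id"));        // 1
console.log(getProp(w, serialize)()); // "widget#1"
```

Before 2.9, the second getProp call would not type-check, because keyof dropped the symbol key.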
How to install and configure TypeScript
How to work with classes in Typescript
Tools in TypeScript
New memory usage optimizations implemented in V8 Lite can also benefit V8

Sugandha Lahoti
13 Sep 2019
4 min read
The V8 Lite project, started in late 2018 and shipped in V8 version 7.3, dramatically reduced V8's memory usage. V8 is Google's open-source JavaScript and WebAssembly engine, written in C++. V8 Lite provides a 22% reduction in typical web page heap size compared to V8 version 7.1 by disabling code optimization, not allocating feedback vectors, and aging seldom-executed bytecode.

Initially, this project was envisioned as a separate Lite mode of V8. However, the team realized that many of the memory optimizations could be used in regular V8, thereby benefiting all users of V8. They found that most of the memory savings of Lite mode could be achieved, with none of the performance impact, by making V8 lazier. They implemented lazy feedback allocation, lazy source positions, and bytecode flushing to bring the V8 Lite memory optimizations to regular V8.

Read also: LLVM WebAssembly backend will soon become Emscripten default backend, V8 announces

Lazy allocation of feedback vectors

The team now lazily allocates feedback vectors after a function executes a certain amount of bytecode (currently 1KB). Since most functions aren't executed very often, this avoids feedback vector allocation in most cases, while quickly allocating vectors where needed to avoid performance regressions and still allow code to be optimized. One hitch was that lazy allocation no longer allowed feedback vectors to form a tree. To address this, the team created a new ClosureFeedbackCellArray to maintain this tree, then swaps out a function's ClosureFeedbackCellArray for a full FeedbackVector when the function becomes hot. The team says they "have enabled lazy feedback allocation in all builds of V8, including Lite mode where the slight regression in memory compared to their original no-feedback allocation approach is more than compensated by the improvement in real-world performance."

Compiling bytecode without collecting source positions

Source position tables are generated when compiling bytecode from JavaScript. However, this information is only needed when symbolizing exceptions or performing developer tasks such as debugging. To avoid this waste, bytecode is now compiled without collecting source positions; they are only collected when a stack trace is actually generated. The team has also fixed bytecode mismatches and added checks and a stress mode to ensure that eager and lazy compilation of a function always produce consistent outputs.

Flushing compiled bytecode from functions not executed recently

Bytecode compiled from JavaScript source takes up a significant chunk of V8 heap space. Therefore, compiled bytecode is now flushed from functions during garbage collection if they haven't been executed recently, along with the feedback vectors associated with the flushed functions. To keep track of the age of a function's bytecode, the age is incremented after every major garbage collection and reset to zero when the function is executed.

Additional memory optimizations

The size of FunctionTemplateInfo objects has been reduced: the object is split such that the rare fields are stored in a side table that is only allocated on demand if required.
TurboFan optimized code is now deoptimized such that deopt points in optimized code load the deopt id directly before calling into the runtime.

Read also: V8 7.5 Beta is now out with WebAssembly implicit caching, bulk memory operations, and more.

Result comparison for V8 Lite and V8

Source: V8 blog

People on Hacker News appreciated the work done by the team behind V8. A comment reads, "Great engineering stuff. I am consistently amazed by the work of V8 team. I hope V8 v7.8 makes it to Node v12 before its LTS release in coming October."

Another says, "At the beginning of the article, they are talking about building a 'v8 light' for embedded application purposes, which was pretty exciting to me, then they diverged and focused on memory optimization that's useful for all v8. This is great work, no doubt, but as the most popular and well-tested JavaScript engine, I'd love to see a focus on ease of building and embedding."

https://twitter.com/vpodk/status/1172320685634420737

More details are available on the V8 blog.

Other interesting news in Tech

Google releases Flutter 1.9 at GDD (Google Developer Days) conference
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Apple's September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, new iPad, and more.
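The lazy feedback allocation idea described in this article (don't pay for bookkeeping until a function proves it is hot) can be illustrated in ordinary TypeScript. This is an analogy only, not V8 internals: withLazyFeedback and HOT_THRESHOLD are invented names, and the call-count threshold stands in for V8's roughly 1KB-of-executed-bytecode budget.

```typescript
// Analogy only: allocate a per-function "feedback" record lazily,
// once the function has been called enough times to be considered hot.
const HOT_THRESHOLD = 3; // stand-in for V8's ~1KB-of-executed-bytecode budget

function withLazyFeedback(fn: (a: number, b: number) => number) {
  let calls = 0;
  let feedback: Array<[number, number]> | null = null; // not allocated up front

  return {
    call(a: number, b: number): number {
      calls++;
      if (feedback === null && calls >= HOT_THRESHOLD) {
        feedback = []; // allocated only for hot functions
      }
      feedback?.push([a, b]); // record "type feedback" once allocated
      return fn(a, b);
    },
    isHot: () => feedback !== null,
  };
}

const add = withLazyFeedback((a, b) => a + b);
add.call(1, 2);
add.call(3, 4);
console.log(add.isHot()); // false: still cold, nothing allocated
add.call(5, 6);
console.log(add.isHot()); // true: third call crossed the threshold
```

Cold functions (the common case) never pay for the feedback structure, which is the source of the memory savings; hot functions get it quickly enough that optimization is unaffected.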