
Tech News - Programming

573 Articles

Microsoft introduces Remote Development extensions to make remote development easier on VS Code

Bhagyashree R
03 May 2019
3 min read
Yesterday, Microsoft announced the preview of the Remote Development extension pack for VS Code, which enables developers to use a container, a remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment.

https://twitter.com/code/status/1124016109076799488

Currently, developers will need to use the Insiders build for remote development until the stable version is available. The Insiders builds are the versions shipped daily with the latest features and bug fixes.

Why are these VS Code extensions needed?

Developers often choose containers or remote virtual machines configured with specific development and runtime stacks as their development environment. This is an optimal choice because configuring such development environments locally can be too difficult or sometimes even impossible.

Data scientists also require remote environments to do their work efficiently. They build and train data models, and to do that they need to analyze large datasets. This demands massive storage and compute capacity, which a local machine can hardly provide.

One option to solve this problem is Remote Desktop, but it can sometimes be laggy. Developers often use Vim and SSH or local tools with file synchronization, but these can also be slow and error-prone. There are browser-based tools that can be used in some scenarios, but they lack the richness and familiarity that desktop tools provide.

The VS Code Remote Development extension pack

Looking at these challenges, the VS Code team came up with a solution in which VS Code runs in two places at once. One instance runs the developer tools locally, and the other connects to a set of development services running remotely in the context of a physical or virtual machine. The pack includes three extensions for working with remote workspaces:

Remote - WSL allows you to use WSL as a full development environment directly from VS Code. It runs commands and extensions directly in WSL, so developers don't have to think about pathing issues, binary compatibility, or other cross-OS challenges. With this extension, developers can edit files located in WSL or the mounted Windows filesystem, and also run and debug Linux-based applications on Windows.

Remote - SSH allows you to open folders or workspaces hosted on any remote machine, VM, or container with a running SSH server. It runs commands and other extensions directly on the remote machine, so you don't need to have the source code on your local machine. It enables you to use larger, faster, or more specialized hardware than your local machine. You can also quickly switch between different remote development environments and safely make updates.

Remote - Containers allows you to use a Docker container as your development container. It starts or attaches to a development container, which runs a well-defined tool and runtime stack. All your workspace files are copied or cloned into the container, or mounted from the local file system. The development container can be configured with a 'devcontainer.json' file.

To read more in detail, visit Microsoft's official website.

Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Microsoft and GitHub employees come together to stand with the 996.ICU repository
Microsoft employees raise their voice against the company's misogynist, sexist and racist acts


Google announces the general availability of a new API for Google Docs

Amrata Joshi
12 Feb 2019
2 min read
Yesterday, Google announced the general availability of a new API for Google Docs that will help developers automate tasks that users otherwise perform manually in the company's online office suite. The API lets users read and write documents programmatically so that they can integrate data from various sources. The API had been in developer preview since Google Cloud Next 2018 and is now available to all developers.

The API lets users automate processes, create documentation in bulk, and generate invoices or contracts. With it, developers can set up processes that manipulate documents. It provides the ability to insert, move, delete, merge, and format text, insert inline images, and work with lists.

Zapier, Netflix, Mailchimp, and Final Draft are some of the companies that built solutions based on the new API during the preview period. Zapier integrated the Docs API into its workflow automation tool to help users create offer letters based on a template. Netflix used it to build an internal tool that allows its engineers to gather data and automate its documentation workflow.

The API will help users regularly create similar documents with changing order numbers and line items based on information from third-party systems. Its import/export abilities also let users plug Docs into internal content management systems.

Some users are happy with this news and excited to use the API. One user commented on Hacker News, "That is such great work. Getting the job done with the tools already around is just such a good feeling." Others think it will take some time for Google to reach where Microsoft is now. Another comment reads, "They will have a lot of catchup to do to get where Office is now. I'm frankly amazed by how good Microsoft Flow has been." Another user commented, "Microsoft Flow is a really powerful - in terms of advanced capabilities it offers."

To know more about this news, check out Google's official post.

Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
Youtube promises to reduce recommendations of 'conspiracy theory'. Ex-googler explains why this is a 'historic victory'
Google's Adiantum, a new encryption standard for lower-end phones and other smart devices
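To make the kind of automation described above concrete, here is a minimal sketch of inserting text into a document through the Docs API. It assumes the googleapis Node.js client library and an environment already credentialed with the Docs scope; the helper name and document ID are placeholders, not taken from the article.

```typescript
import { google } from 'googleapis';

// Hypothetical helper: prepend one line of text to an existing Google Doc.
async function prependLine(documentId: string, line: string): Promise<void> {
  // Assumes application-default credentials with the Docs scope are available.
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/documents'],
  });
  const docs = google.docs({ version: 'v1', auth });

  // batchUpdate applies a list of edit requests to the document in one call;
  // insertText at index 1 places the text at the start of the document body.
  await docs.documents.batchUpdate({
    documentId,
    requestBody: {
      requests: [
        {
          insertText: {
            location: { index: 1 },
            text: `${line}\n`,
          },
        },
      ],
    },
  });
}

// Example usage with a placeholder document ID.
prependLine('YOUR_DOCUMENT_ID', 'Invoice #1024 - generated automatically').catch(console.error);
```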


ActiveState adds thousands of curated Python packages to its platform

Fatema Patrawala
28 Nov 2019
3 min read
On Tuesday, ActiveState, a Canadian software company, announced the addition of thousands of Python packages to its ActiveState Platform. ActiveState helps enterprises scale securely with open source languages and offers developers various tools to work with. More than 2 million developers and 97% of Fortune 1,000 enterprises use ActiveState to support mission-critical systems and speed up their software development process.

The ActiveState Platform is a SaaS platform for open source language automation that centrally builds, certifies, and resolves runtime environments. It incorporates more than 20 years of engineering expertise in order to automate much of the complexity associated with building, maintaining, and sharing Python and Perl runtimes. With minimal knowledge, a developer can automatically build open source language runtimes, resolve dependencies, and certify the result against compliance and security criteria. The result is a consistent, reproducible runtime from development to production.

With this latest update, the company has added more than 50,000 package versions covering the most popular Python 2 and 3 packages, as well as their dependencies. These dependencies can be automatically resolved, built, and packaged into runtimes to eliminate issues.

"Python is one of the most popular programming languages on the planet right now, so it's no wonder that the majority of the more than 200,000 developers on the ActiveState Platform are asking us to do more to support their Python development efforts. In order to ensure our customers can automatically build all Python packages, even those that contain C code, we're designing systems to vet the code and metadata for every package in PyPI. Today's release is a significant first step toward that goal," says Jeff Rouse, Vice President, Product Management at ActiveState.

The company is preparing itself for the Python 2 EOL and, in the process, has vetted thousands of key Python 2 packages critical to supporting customers' Python 2 applications. In addition, the company has added many of the most popular Python 3 packages to support its broad customer base. It is a significant milestone on the road toward making all of the Python Package Index (PyPI) available on the ActiveState Platform.

To know more about this news, check out the official press release by the company.

Listen: How ActiveState is tackling "dependency hell" by providing enterprise-level support for open source programming languages [Podcast]
Introducing ActiveState State Tool, a CLI tool to automate dev & test setups, workflows, share secrets and manage ad-hoc tasks
Python 3.9 alpha 1 is now ready for testing
PyPI announces 2FA for securing Python package downloads
Getting Started with Python Packages


Microsoft introduces Static TypeScript, as an alternative to embedded interpreters, for programming MCU-based devices

Sugandha Lahoti
04 Sep 2019
4 min read
Microsoft yesterday unveiled Static TypeScript as an alternative to embedded interpreters. Static TypeScript (STS) is an implementation of a static compiler for TypeScript which runs in the web browser. It is primarily designed to aid school children in their computer science programming projects. STS is supported by a compiler that is itself written in TypeScript. It generates machine code that runs efficiently on microcontrollers in the target RAM range of 16-256 kB.

Microsoft's plan behind building Static TypeScript

Microcontrollers are typically programmed in C, C++, or assembly, none of which are particularly beginner-friendly. MCUs that can run modern languages such as JavaScript and Python usually rely on interpreters like IoT.js, Duktape, or MicroPython. The problem with interpreters is high memory usage, leaving little room on the devices themselves for the program developers have written. Microsoft therefore came up with STS as a more efficient alternative to the embedded interpreter approach. It is statically typed, which makes for a less surprising programming experience.

Features of Static TypeScript

STS eliminates most of the "bad parts" of JavaScript; following StrongScript, STS uses nominal typing for statically declared classes and supports efficient compilation of classes using classic techniques for vtables. The STS toolchain runs offline, once loaded into a web browser, without the need for a C/C++ compiler. The STS compiler generates efficient and compact machine code, which unlocks a range of application domains such as game programming for low-resource devices. Deployment of STS user programs to embedded devices does not require app or device driver installation, just access to a web browser. The relatively simple compilation scheme for STS leads to surprisingly good performance on a collection of small JavaScript benchmarks, often comparable to advanced, state-of-the-art JIT compilers like V8, with orders of magnitude smaller memory requirements.

Differences with TypeScript

In contrast to TypeScript, where all object types are bags of properties, STS has at runtime four kinds of unrelated object types:
- A dynamic map type with named (string-indexed) properties that can hold values of any type
- A function (closure) type
- A class type, which describes instances of a class that are treated nominally, via an efficient runtime subtype check on each field/method access
- An array (collection) type

STS compiler and runtime

The STS compiler and toolchain (linker, etc.) are written solely in TypeScript. The source TypeScript program is processed by the regular TypeScript compiler to perform syntactic and semantic analysis, including type checking. The STS device runtime is mainly written in C++ and includes a bespoke garbage collector. The regular TypeScript compiler, the STS code generators, assembler, and linker are all implemented in TypeScript and run both in the web browser and on the command line. The STS toolchain compiles STS to Thumb machine code and links this code against a pre-compiled C++ runtime in the browser, which is often the only available execution environment in schools.

Static TypeScript is used in all MakeCode editors

STS is the core language supported by Microsoft's MakeCode framework. MakeCode provides hands-on computing education for students through projects and enables the creation of custom programming experiences for MCU-based devices. Each MakeCode editor targets programming of a specific device or device class via STS. STS supports the concept of a package, a collection of STS, C++, and assembly files that can also list other packages as dependencies. This capability has been used by third parties to extend the MakeCode editors, mainly to accommodate hardware peripherals for various boards.

STS is also used in MakeCode Arcade. With Arcade, STS lets developers of all skill levels easily write retro-style pixelated games. The games are designed by the user to run either inside a virtual game console in the browser or on inexpensive microcontroller-based handhelds.

For more in-depth information, please read the research paper. People were quite interested in this development. A comment on Hacker News reads, "This looks very interesting. If all it takes is dropping "with, eval, and prototype inheritance" to get fast and efficient JS execution, I'm all for it."

Other news in tech

TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support and more
Microsoft announces XLOOKUP for Excel users that fixes most VLOOKUP issues
Microsoft announces its support for bringing exFAT in the Linux kernel; open sources technical specs
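To give a feel for the statically compilable style the article describes, here is a small, ordinary TypeScript snippet written in that spirit: nominally typed classes with declared fields and no with, eval, or prototype tricks. It is illustrative only and does not use any MakeCode device APIs.

```typescript
// A nominally declared class: in the STS model, field and method access on
// instances like this compiles to fixed offsets and vtable dispatch rather
// than dynamic property lookup.
class Sprite {
  constructor(public x: number, public y: number) {}

  move(dx: number, dy: number): void {
    this.x += dx;
    this.y += dy;
  }
}

// A statically typed function over a homogeneous array (collection) type.
function step(sprites: Sprite[]): void {
  for (const s of sprites) {
    s.move(1, 0);
  }
}

step([new Sprite(0, 0), new Sprite(4, 2)]);
```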


TypeScript 3.6 releases with stricter generators, new functions in TypeScript playground, better Unicode support for identifiers, and more

Vincy Davis
29 Aug 2019
4 min read
Yesterday, TypeScript Program Manager Daniel Rosenwasser announced the release of TypeScript 3.6. This is a major release of TypeScript, containing many new language and compiler features such as stricter generators, more accurate array spread, improved UX around Promises, better Unicode support for identifiers, and more. TypeScript 3.6 also introduces a new TypeScript playground, new editor features, and several breaking changes. TypeScript 3.6 beta was released last month.

Language and compiler improvements

Stricter checking for iterators and generators
Previously, users of generators in TypeScript could not differentiate whether a value was yielded or returned from a generator. In TypeScript 3.6, due to changes in the Iterator and IteratorResult type declarations, a new Generator type has been introduced. It is an Iterator that always has both the return and throw methods present. This allows the stricter generator checker to understand the difference between yielded and returned values. TypeScript 3.6 also infers certain uses of yield within the body of a generator function, and the yield expression can be typed explicitly to enforce the type of values that can be returned, yielded, and evaluated.

More accurate array spread
For pre-ES2015 targets, TypeScript offers the --downlevelIteration flag to support iterative constructs with arrays. However, many users found it undesirable that the emit produced without it had no defined property slots. To address this problem, TypeScript 3.6 presents a new __spreadArrays helper. It will "accurately model what happens in ECMAScript 2015 in older targets outside of --downlevelIteration."

Improved UX around Promises
TypeScript 3.6 brings new improvements around the Promise API, which is one of the most common ways to work with asynchronous data. TypeScript's error messages will now inform the user when a Promise should have been awaited or unwrapped with then() before being passed to another function, and quick fixes are provided in some cases.

Better Unicode support for identifiers
TypeScript 3.6 contains better support for Unicode characters in identifiers when emitting to ES2015 and later targets.

import.meta support in SystemJS: The new version supports the transformation of import.meta to context.meta when the module target is set to system.

get and set accessors are allowed in ambient contexts: Previous versions of TypeScript did not allow the use of get and set accessors in ambient contexts. This has been changed in TypeScript 3.6, since the ECMAScript class fields proposal has differing behavior from the existing TypeScript implementation. The official post also adds, "In TypeScript 3.7, the compiler itself will take advantage of this feature so that generated .d.ts files will also emit get/set accessors."

Read Also: Microsoft introduces Pyright, a static type checker for the Python language written in TypeScript

New functions in the TypeScript playground

The TypeScript playground allows users to compile TypeScript and check the JavaScript output. It has more compiler options than typescriptlang, and all the strict options are turned on by default in the playground. The following new functions are added to the TypeScript playground:
- The target option, which allows users to switch out of es5 to es3, es2015, esnext, etc.
- All the strictness flags
- Support for plain JavaScript files

The post also states that future versions of TypeScript can be expected to bring more features, such as JSX support and polished automatic type acquisition.

Breaking changes
- Class members named "constructor" are now simply constructor functions.
- DOM updates: the global window will no longer be defined as type Window. Instead, it is defined as type Window & typeof globalThis.
- In JavaScript files, TypeScript will only consult immediately preceding JSDoc comments to figure out declared types.
- TypeScript 3.6 no longer allows certain escape sequences.

Developers have liked the new features in TypeScript 3.6.
https://twitter.com/zachcodes/status/1166840093849473024
https://twitter.com/joshghent/status/1167005999204638722
https://twitter.com/FlorianRappl/status/1166842492718899200

Interested users can check out TypeScript's 6-month roadmap. Visit the Microsoft blog for full updates on TypeScript 3.6.

Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more
Babel 7.5.0 releases with F# pipeline operator, experimental TypeScript namespaces support, and more
TypeScript 3.5 releases with 'omit' helper, improved speed, excess property checks and more
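Here is a small sketch of what the stricter generator typing described above enables. It assumes a TypeScript 3.6 or later compiler and is not code from the release announcement; the three type parameters of Generator<Y, R, N> let the checker tell yielded values, the final return value, and the values fed back through next() apart.

```typescript
// Generator<Yield, Return, Next>: yields numbers, finally returns a string,
// and expects a boolean to be passed back in via next().
function* counter(limit: number): Generator<number, string, boolean> {
  for (let i = 0; i < limit; i++) {
    // `keepGoing` is typed as boolean, inferred from the Next type parameter.
    const keepGoing = yield i;
    if (keepGoing === false) {
      break;
    }
  }
  return 'done';
}

const it = counter(3);
let step = it.next();
while (!step.done) {
  console.log(step.value); // narrowed to number while done is false
  step = it.next(true);
}
console.log(step.value);   // the generator's return value, narrowed to string
```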


MacOS terminal emulator, iTerm2 3.3.0 is here with new Python scripting API, a scriptable status bar, Minimal theme, and more

Vincy Davis
02 Aug 2019
4 min read
Yesterday, the team behind iTerm2, the GPL-licensed terminal emulator for macOS, announced the release of iTerm2 3.3.0. It is a major release with many new features such as a new Python scripting API, a new scriptable status bar, two new themes, and more. iTerm2 is a successor to iTerm and works on macOS. It is an open source replacement for Apple's Terminal and is highly customizable, coming with a lot of useful features.

Major highlights in iTerm2 3.3.0
- A new Python scripting API, which can control iTerm2 and extend its behavior, has been added. It allows users to write Python scripts easily, enabling extensive configuration and customization in iTerm2 3.3.0.
- A new scriptable status bar has been added, with 13 built-in configurable components.
- iTerm2 3.3.0 comes with two new themes. The first theme, called Minimal, helps reduce visual clutter. The second theme, Compact, can move tabs into the title bar, saving space while maintaining the general appearance of a macOS app.

Other new features in iTerm2 3.3.0
- The session, tab, and window titles have been given a new appearance to make them more flexible and comprehensible. It is now possible to configure these titles separately and to select what type of information they show per profile. These titles are integrated with the new Python scripting API.
- Tab titles have new icons, which either indicate a running app or show a fixed icon per profile.
- A new toolbelt called 'Actions' has been introduced in iTerm2 3.3.0. It provides shortcuts to frequent actions, like sending a snippet of text.
- A new utility, 'it2git', which allows the git status bar component to show git state on a remote host, has been added.
- New support for crossed-out text (SGR 9) and for automatically restarting a session when it ends has also been added in iTerm2 3.3.0.

Other improvements in iTerm2 3.3.0
- Many visual improvements
- Updated app icon
- Various pages of preferences have been rearranged to make them more visually appealing
- The password manager can be used to enter a password securely
- A new option to log Automatic Profile Switching messages to the scripting console has been added
- The performance of long scrollback histories has been improved

Users love the new features in the iTerm2 3.3.0 release, especially the new Python API, the scriptable status bar, and the new Minimal mode.
https://twitter.com/lambdanerd/status/1157004396808552448
https://twitter.com/alloydwhitlock/status/1156962293760036865
https://twitter.com/josephcs/status/1157193431162036224
https://twitter.com/dump/status/1156900168127713280

A user on Hacker News comments, "First off, wow love the status bar idea." Another user on Hacker News says, "Kudos to Mr. Nachman on continuing to develop a terrific piece of macOS software! I've been running the 3.3 betas for a while and some of the new functionality is really great. Exporting a recording of a terminal session from the "Instant Replay" panel is very handy!"

A few users are not impressed with the iTerm2 3.3.0 features and are comparing it with the Terminal app. A comment on Hacker News reads, "I like having options but wouldn't recommend iTerm. Apple's Terminal.app is more performant rendering text and more responsive to input while admittedly having somewhat less unnecessary features. In fact, iTerm is one of the slowest terminals out there! iTerm used to have a lot of really compelling stuff that was missing from the official terminal like tabs, etc that made straying away from the canonical terminal app worth it but most of them eventually made their way to Terminal.app so nowadays it's mostly just fluff."

For the full list of improvements in iTerm2 3.3.0, visit the iTerm2 changelog page.

Apple previews macOS Catalina 10.15 beta, featuring Apple music, TV apps, security, zsh shell, driverKit, and much more!
WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more
Safari Technology Preview release 83 now available for macOS Mojave and macOS High Sierra

Python comes third in TIOBE popularity index for the first time

Prasad Ramesh
10 Sep 2018
2 min read
Python made it to the TIOBE index in the third position for the first time in its history. The TIOBE programming community index is a common measure of programming language popularity. It is created and maintained by the TIOBE company, based in the Netherlands. Popularity in the index is calculated based on the number of search engine results for queries containing the name of the language. Searches from Google, Google Blogs, MSN, Yahoo!, Baidu, Wikipedia, and YouTube are considered. The TIOBE index is updated once a month.

Source: TIOBE

Python is third behind Java and C. Python's rating is 7.653 percent, while Java has a rating of 17.436 percent. C is in second place, rated at 15.447 percent. Python moved above C++ to take third place; C++ was third last month and is now in fourth place, with a rating of 7.394 percent.

Python is increasingly ubiquitous, being used in many research areas like AI and machine learning, which are all the buzz today. The increasing popularity is not surprising, as Python has versatile applications: AI and machine learning, software development, web development, scripting, scientific applications, and even games, you name it. Python is easy to install, learn, use, and deploy. The syntax is also very simple and beginner-friendly.

TIOBE states that reaching third place really took a long time: "At the beginning of the 1990s it entered the chart. Then it took another 10 years before it reached the TIOBE index top 10 for the first time. After that it slowly but surely approached the top 5 and eventually the top 3." Python has also been the language of the year in the index for the years 2007 and 2010.

The current top 5 languages are Java, C, Python, C++, and Visual Basic .NET. To read more and view the complete list, visit the TIOBE website.

Build a custom news feed with Python [Tutorial]
Home Assistant: an open source Python home automation hub to rule all things smart
Build botnet detectors using machine learning algorithms in Python [Tutorial]


Introducing Ballista, a distributed compute platform based on Kubernetes and Rust

Amrata Joshi
18 Jul 2019
3 min read
Andy Grove, a software engineer, has introduced Ballista, a distributed compute platform, and in a recent blog post he explained his journey on this project. Roughly eighteen months ago, he started the DataFusion project, an in-memory query engine that uses Apache Arrow as its memory model. The aim was to build a distributed compute platform in Rust that could compete with Apache Spark, but that turned out to be difficult for him.

Grove writes in the blog post, "Unsurprisingly, this turned out to be an overly ambitious goal at the time and I fell short of achieving that. However, some very good things came out of this effort. We now have a Rust implementation of Apache Arrow with a growing community of committers, and DataFusion was donated to the Apache Arrow project as an in-memory query execution engine and is now starting to see some early adoption."

He then took a break from working on Arrow and DataFusion for a couple of months and focused on some deliverables at work. After that, he started a new PoC (proof of concept) project, his second attempt at building a distributed platform with Rust, but this time with the advantage of already having Arrow and DataFusion at his disposal. His new project is called Ballista, a distributed compute platform based on Kubernetes and the Rust implementation of Apache Arrow.

A Ballista cluster currently comprises a number of individual pods within a Kubernetes cluster and can be created and destroyed via the Ballista CLI. Ballista applications can be deployed to Kubernetes with the help of the Ballista CLI, and they use Kubernetes service discovery to connect to the cluster. Since there is no distributed query planner yet, Ballista applications must manually build the query plans to be executed on the cluster.

To make this project practically useful and push it beyond the limits of a PoC, Grove listed some of the items on the roadmap for v1.0.0:
- Implement a distributed query planner
- Support all DataFusion logical plans and expressions
- Support user code as part of distributed query execution
- Support interactive SQL queries against a cluster with gRPC
- Support the Arrow Flight protocol and Java bindings

This PoC project will help drive the requirements for DataFusion, and it has already led to three DataFusion PRs being merged into the Apache Arrow codebase.

It seems there are mixed reviews for this initiative. A user commented on Hacker News, "Hang in there mate :) I really don't think you deserve a lot of the crap you've been given in this thread. Someone has to try something new." Another user commented, "The fact people opposed to your idea/work means it is valuable enough for people to say something against and not ignore it."

To know more about this news, check out the official announcement.

Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
Rust 1.36.0 releases with a stabilized 'Future' trait, NLL for Rust 2015, and more
Introducing Vector, a high-performance data router, written in Rust


Angular localization with Ivy from Angular Blog - Medium

Matthew Emerick
09 Sep 2020
5 min read
Part of the new Angular rendering engine, Ivy, includes a new approach to localizing applications, specifically extracting and translating text. This article explains the benefits and some of the implementation of this new approach.

Prior to Ivy, the only way to add localizable messages to an Angular application was to mark them in component templates using the i18n attribute:

<div i18n>Hello, World!</div>

The Angular compiler would replace this text when compiling the template with different text if a set of translations was provided in the compiler configuration. The i18n tags are very powerful: they can be used in attributes as well as content; they can include complex nested ICU (International Components for Unicode) expressions; they can have metadata attached to them. See our i18n guide for more information.

But there were some shortcomings to this approach. The most significant concern was that translation had to happen during template compilation, which occurs right at the start of the build pipeline. The result of this is that the full build (compilation, bundling, minification, etc.) had to happen for each locale that you wanted to support in your application. (Build times will vary based on project size.) If a single build took 3 minutes, then the total build time to support 9 locales would be 3 mins x 9 locales = 27 mins.

Moreover, it was not possible to mark text in application code for translation, only text in component templates. This resulted in awkward workarounds where artificial components were created purely to hold text that would be translated. Finally, it was not possible to load translations at runtime, which meant it was not possible for applications to be provided to an end-user who might want to provide translations of their own, without having to build the application themselves.

The new localization approach is based around the concept of tagging strings in code with a template literal tag handler called $localize. The idea is that strings that need to be translated are "marked" using this tag:

const message = $localize`Hello, World!`;

This $localize identifier can be a real function that can do the translation at runtime, in the browser. But, significantly, it is also a global identifier that survives minification. This means it can act simply as a marker in the code that a static post-processing tool can use to replace the original text with translated text before the code is deployed. For example, the following code:

warning = $localize`${this.process} is not right`;

could be replaced with:

warning = "" + this.process + ", ce n'est pas bon.";

The result is that all references to $localize are removed, and there is zero runtime cost to rendering the translated text.

The Angular template compiler, for Ivy, has been redesigned to generate $localize tagged strings rather than doing the translation itself. For example, the following template:

<h1 i18n>Hello, World!</h1>

would be compiled to something like:

ɵɵelementStart(0, "h1");              // <h1>
ɵɵi18n(1, $localize`Hello, World!`);  // Hello, World!
ɵɵelementEnd();                       // </h1>

This means that after the Angular compiler has completed its work, all the template text marked with i18n attributes has been converted to $localize tagged strings, which can be processed just like any other tagged string.

Notice also that the $localize tagged strings can occur in any code (user code or code generated from templates in applications or libraries) and are not affected by minification. So while the post-processing tool might receive code that looks like this:

...var El,kl=n("Hfs6"),Sl=n.n(kl);El=$localize`Hello, World!`;let Cl=(()=>{class e{constructor(e)...

it is still able to identify and translate the tagged message.

The result is that we can reorder the build pipeline to do translation at the very end of the process, resulting in a considerable build time improvement. (Build times will vary based on project size.) Here you can see that the build time is still 3 minutes, but since the translation is done as a post-processing step, we only incur that build cost once. Also, the post-processing of the translations is very fast since the tool only has to parse the code for $localize tagged strings; in this case around 5 seconds. The result is that the total build time for 9 locales is now 3 minutes + (9 x 5 seconds) = 3 minutes 45 seconds, compared to 27 minutes for the pre-Ivy translated builds. Similar improvements have been seen in real life by teams already using this approach.

The post-processing of translations is already built into the Angular CLI, and if you have configured your projects according to our i18n guide you should already be benefitting from these faster build times.

Currently the use of $localize in application code is not yet publicly supported or documented. We will be working on making this fully supported in the coming months. It requires new message extraction tooling; the current (pre-Ivy) message extractor does not find $localize text in application code. This is being integrated into the CLI now and should be released as part of 10.1.0. We are also looking into how we can better support translations in 3rd party libraries using this new approach. Since this would affect the Angular Package Format (APF), we expect to run a Request for Comment (RFC) before implementing that.

In the meantime, enjoy the improved build times and keep an eye out for full support of application-level localization of text.

Angular localization with Ivy was originally published in Angular Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
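As a small, self-contained illustration of the tagging mechanism described in the article (not code from the Angular post itself), the snippet below marks a string with $localize in plain TypeScript. It assumes @angular/localize is installed, whose init entry point attaches the global tag; as the article notes, direct use of $localize in application code was not yet officially supported at the time of writing.

```typescript
// Attaches the global $localize tag (assumes the @angular/localize package).
import '@angular/localize/init';

const name = 'Ada';

// Marked for translation: a build-time post-processing step can swap the
// tagged string for a translated one, while the ${name} substitution survives.
const greeting = $localize`Welcome back, ${name}!`;

// With no translations loaded, the tag falls back to the source text.
console.log(greeting);
```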


Debian 10.2 Buster Linux distribution releases with the latest security and bug fixes

Bhagyashree R
18 Nov 2019
3 min read
Last week, the Debian team released Debian 10.2, the latest point release in the "Buster" series. This release includes a number of bug fixes and security updates. In addition, starting with this release, Firefox ESR (Extended Support Release) is no longer supported on the ARMEL variant of Debian.

Key updates in Debian 10.2

Security updates
Some of the security fixes added in Debian 10.2 are:
- Apache2: The following vulnerabilities reported in the Apache HTTPD server are fixed: CVE-2019-9517, CVE-2019-10081, CVE-2019-10082, CVE-2019-10092, CVE-2019-10097, CVE-2019-10098.
- Nghttp2: Two vulnerabilities, CVE-2019-9511 and CVE-2019-9513, found in the HTTP/2 code of the nghttp2 HTTP server are fixed.
- PHP 7.3: Five security issues that could result in information disclosure or denial of service were fixed: CVE-2019-11036, CVE-2019-11039, CVE-2019-11040, CVE-2019-11041, CVE-2019-11042.
- Linux: Five security issues in the Linux kernel that could otherwise lead to privilege escalation, denial of service, or information leaks were fixed: CVE-2019-14821, CVE-2019-14835, CVE-2019-15117, CVE-2019-15118, CVE-2019-15902.
- Thunderbird: Security issues reported in Thunderbird that could potentially result in the execution of arbitrary code, cross-site scripting, and information disclosure were fixed. These are tracked as CVE-2019-11739, CVE-2019-11740, CVE-2019-11742, CVE-2019-11743, CVE-2019-11744, CVE-2019-11746, CVE-2019-11752.

Bug fixes
Debian 10.2 brings several bug fixes for popular packages, some of which are:
- Emacs: The European Patent Litigation Agreement (EPLA) key is now updated.
- Flatpak: Debian 10.2 includes the new upstream stable release of Flatpak, a tool for building and distributing desktop applications on Linux.
- GNOME Shell: In addition to including the new upstream stable release of GNOME Shell, this release fixes truncation of long messages in Shell-modal dialogs and avoids a crash on the reallocation of dead actors.
- LibreOffice: The PostgreSQL driver with PostgreSQL 12 is now fixed.
- Systemd: Starting from Debian 10.2, reload failures do not get propagated to service results. The 'sync_file_range' failures in nspawn containers on ARM and PPC systems are fixed.
- uBlock: The uBlock adblocker is updated to its new upstream version and is compatible with Firefox ESR68.

These were some of the updates in Debian 10.2. Check out the official announcement by the Debian team to know what else has shipped in this release.

Severity issues raised for Python 2 Debian packages for not supporting Python 3
Debian 10 codenamed 'buster' released, along with Debian GNU/Hurd 2019 as a port
Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap

Electron 5.0 ships with new versions of Chromium, V8, and Node.js

Sugandha Lahoti
25 Apr 2019
2 min read
After publicly sharing the release timeline for Electron 5.0 and beyond in February, the Electron team released Electron 5.0 on Tuesday, as per the plan, with new features, upgrades, and fixes. Electron ships with the latest version upgrades of its core components Chromium, Node.js, and V8: Chromium 73.0.3683.119, Node.js 12.0.0, and V8 7.3.492.27. Electron 5.0 also includes improvements to Electron-specific APIs. With this release, Electron 2.0.x has reached end of life.

Major changes in Electron 5.0
- Packaged apps will now behave the same as the default app. A default application menu will be created (unless the app has one) and the window-all-closed event will be automatically handled (unless the app handles the event).
- Mixed sandbox mode is now enabled by default. Renderers launched with sandbox: true will now actually be sandboxed, where previously they would only be sandboxed if mixed-sandbox mode was also enabled.
- The default values of nodeIntegration and webviewTag are now false to improve security.
- The SpellCheck API has been changed to provide asynchronous results.

New features
- BrowserWindow now supports managing multiple BrowserViews within the same BrowserWindow.
- Electron 5 continues Electron's Promisification initiative, which converts callback-based functions in Electron to return Promises. During this transition period, both the callback- and Promise-based versions of these functions will work correctly, and both will be documented. A total of 12 APIs were converted for Electron 5.0.
- Three functions were changed or added to systemPreferences to access macOS system colors: systemPreferences.getAccentColor, systemPreferences.getColor, and systemPreferences.getSystemColor.
- The function process.getProcessMemoryInfo has been added to get memory usage statistics about the current process.
- New remote events have been added to improve security in the remote API. Now, remote.getBuiltin, remote.getCurrentWindow, remote.getCurrentWebContents and <webview>.getWebContents can be filtered.

Deprecated APIs
Three APIs are newly deprecated in Electron 5.0.0 and planned for removal in 6.0.0: Mksnapshot binaries for arm and arm64, ServiceWorker APIs on WebContents, and automatic modules with sandboxed webContents.

These are just a select few updates. For other specific details, see the release notes. Also, check out the tentative 6.0.0 schedule for key dates in the Electron 6 development life cycle. Users can install Electron 5.0 with npm via npm install electron@latest or download the tarballs from the Electron releases page.

The Electron team publicly shares the release timeline for Electron 5.0
Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
How to create a desktop application with Electron [Tutorial]
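To illustrate the promise-based APIs mentioned in the article above, here is a minimal main-process sketch. It assumes an Electron 5.x project with the usual entry-point setup; the window options and URL are placeholders, and the promise-based shape of process.getProcessMemoryInfo is taken from Electron's documentation rather than from this article.

```typescript
// main.ts (assumes an Electron 5.x project; run via `electron .`)
import { app, BrowserWindow } from 'electron';

app.on('ready', async () => {
  const win = new BrowserWindow({
    width: 800,
    height: 600,
    webPreferences: {
      nodeIntegration: false, // the new Electron 5 default, shown explicitly here
    },
  });
  win.loadURL('https://example.com'); // placeholder URL

  // One of the promise-based APIs: resolves with memory statistics (in KB)
  // for the current process, so it can simply be awaited.
  const memory = await process.getProcessMemoryInfo();
  console.log(`private memory: ${memory.private} KB`);
});
```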


GitHub deprecates and then restores Network Graph after GitHub users share their disapproval

Vincy Davis
02 May 2019
2 min read
Yesterday, GitHub announced in a blog post that it was deprecating the Network Graph from the repository's Insights panel and that visits to this page would be redirected to the forks page instead. Following this announcement, the network graph was removed. On the same day, however, GitHub deleted the blog post and added the network graph back.

The network graph is one of the most useful features for developers on GitHub. It displays the branch history of the entire repository network, including branches of the root repository and branches of forks that contain commits unique to the network.

Users of GitHub were alarmed to see the blog post about the removal of the network graph without any prior notification or a suitable replacement. For many users, this meant a significant burden of additional work.
https://twitter.com/misaelcalman/status/1123603429090373632
https://twitter.com/theterg/status/1123594154255187973
https://twitter.com/morphosis7/status/1123654028867588096
https://twitter.com/jomarnz/status/1123615123090935808

Following the backlash and requests to bring back the network graph, on the same day, GitHub's Community Manager posted on its community forum that they would be reverting the change based on users' feedback. Later, the blog post announcing the deprecation was removed and the network graph was back on the website. This brought a huge sigh of relief to GitHub's users. The feature is popular for checking the state of a repository and the relationship between active branches.
https://twitter.com/dotemacs/status/1123851067849097217
https://twitter.com/AlpineLakes/status/1123765300862836737

GitHub has not yet officially commented on why it removed the network graph in the first place. A Reddit user has put up an interesting shortlist of suspicions:
- The cost-benefit analysis from "The Top" determined that the compute time for generating the graph was too expensive, and so they "moved" the feature to a more premium account.
- "Moved" could also mean unceremoniously killing off the feature because some manager thought it wasn't shiny enough.
- Microsoft buying GitHub made (and will continue to make) GitHub worse, and this is just a harbinger of things to come.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Apache Software Foundation finally joins the GitHub open source community
Microsoft and GitHub employees come together to stand with the 996.ICU repository


Qt for Python 5.11 released!

Pavan Ramchandani
18 Jun 2018
3 min read
The Qt team, in their blog, announced the official release of Qt with Python support. This is the first official release of the Qt framework with support for Python, and it is tagged as Qt for Python 5.11. Previously, Python support for Qt developers was provided through the development of the PySide module, and the work has now been done on PySide 2 to provide Qt for Python. The Qt team has been working on the core Qt framework for quite some time to incorporate Python support, and this is the first breakthrough in that direction. Adding to this, the Qt team has also noted that versions of Qt earlier than 5.11 will not support Python.

In the release notes, the team has mentioned that subsequent versions of Qt will continue supporting this project and will make the support for Python stable going ahead. This is said to be a preview release, with a list of known issues for early adopters. The team is hoping to receive feedback from users so that it can make the binding smoother and rectify bugs. A lot of work has also gone into keeping the Qt syntax unchanged to allow flexible migration from C++, the de facto language for developing UIs with Qt, to Python and the other way round.

The release blog mentions that the major roadblock in providing Python bindings for the C++-based Qt was the size of the packages. This led the team to work on using external tools for Qt scripting with Python, which had resulted in the development of PySide in 2009. To extend the support for Python, work has been done on the C++ headers in the Qt framework so that developers can write modules in Python. These efforts resulted in the latest PySide 2, which has very little overhead for using Python and Qt for GUI development. The Qt team has developed documentation for this and has provided examples that help you understand the binding. Along with the Python binding for the core Qt framework, the team has also extended support to various Qt toolkits like QtWidgets and QML to build interactive GUIs with Qt and Python.

Early adopters of Qt for Python can report bugs via the Qt for Python project on bugreports.qt.io. The team can be reached on Freenode in #qt-pyside.

Read more
Qt 5.11 has arrived!
WebAssembly comes to Qt. Now you can deploy your next Qt app in browser

BitBucket goes down for over an hour

Natasha Mathur
25 Oct 2018
2 min read
Bitbucket, a web-based version control repository service that allows users to manage and share their Git repositories as a team, suffered an outage today. As per Bitbucket's incident page, the outage started at 8 AM UTC today and lasted for over an hour, until 9:02 AM UTC, before the service finally returned to its normal state. The Bitbucket team tweeted regarding the outage:
https://twitter.com/BitbucketStatus/status/1055372361036312576

Only earlier this week, GitHub went down for a complete day due to a failure in its data storage system. In the case of GitHub, there was no obvious way to tell if the site was down, as the website's backend git services were working. However, users were not able to log in, outdated files were being served, branches went missing, and users were unable to submit Gists, bug reports, posts, etc., among other related issues.

Bitbucket, however, was completely broken during the entirety of the outage, as all the services, from Pipelines to actually getting at the code, were down. It was evident that the site was not working, as it showed an "Internal Server Error". Bitbucket hasn't spoken out regarding the real cause of the outage; however, as per the Bitbucket status page, the site had been experiencing elevated error rates and degraded functionality for the past two days. This could be the possible reason for the outage.

After the outage was over, Bitbucket tweeted about the recovery:
https://twitter.com/BitbucketStatus/status/1055384158392922112

As the services were down, developers and coders around the world took to Twitter to vent their frustration.
https://twitter.com/HeinrichCoetzee/status/1055370890127519744
https://twitter.com/montakurt/status/1055372412651495424
https://twitter.com/CapAmericanec/status/1055370560606294016

Developers rejoice! Github announces Github Actions, Github connect and much more to improve development workflows
GitHub is bringing back Game Off, its sixth annual game building competition, in November
GitHub comes to your code Editor; GitHub security alerts now have machine intelligence


5 Things you need to know about Java 10

Amarabha Banerjee
07 Jun 2018
3 min read
Oracle announced the release of Java 10 on March 20. While this is not an LTS version, there are a few changes in this version which are worth noting. In this article we'll look at 5 of the most important things you need to watch out for, especially if you're a Java developer.

Java releases a long-term support version every 3 years. As per this schedule, the next long-term support version, Java 11, will be released in Fall 2018. Java 10 is a precursor to that and contains some important changes which will take a clearer shape in the next version.

Java 10 is trying to emulate some of the popular features of Scala and Kotlin. One of the primary reasons is the growing popularity of Kotlin in both the web and mobile development domains, along with the concise type inference that Scala and Kotlin both offer. The introduction of local variable type inference is one such feature. It means that variables can now be declared as "var", and when you assign an integer or a string to the variable, the compiler automatically knows what type it is. Although this doesn't make Java a dynamically typed language like Python, it still allows a lot more flexibility for programmers and lets them avoid boilerplate in their code.

There are 2 JEPs in JDK 10 that focus on improving the current Garbage Collection (GC) elements. The first one, the Garbage-Collector Interface (JEP 304), will introduce a clean garbage collector interface to help improve the source code isolation of different garbage collectors. In current Java versions there are bits and pieces of GC source files scattered all over the HotSpot sources. This becomes an issue when implementing a new garbage collector, since developers have to know where to look for those source files. One of the main goals of this JEP is to introduce better modularity for HotSpot internal GC code, have a cleaner GC interface, and make it easier to implement new collectors.

Java 10 promises to become much faster than its previous version by making the full garbage collector parallel. This is a welcome change from version 9, since it gives developers scope to better allocate memory and use the GC (Garbage Collector) in parallel. The full GC in previous versions couldn't run in parallel, which made it heavy and difficult to operate for complex applications. The parallel full GC removes that limitation and makes it much more lightweight and efficient.

Java 10 enables programmers to allow heap allocation on alternative memory devices. This feature lets the Java VM decide on the most important tasks and then allocate maximum memory to those priority processes, while other processes are allocated to alternative memory. This helps speed up the overall process. This change is important for Java developers because it will help them with better and more efficient memory management and hence will increase the performance of their applications.

With these changes, Java 10 has opened the doors for a more open and flexible language that is looking towards the future. With Kotlin breathing down its neck as a worthy alternative, the stage is set for Java to work towards a more dynamic, easy-to-use, power-packed version 11 in Fall 2018. We will be waiting for that along with the Java developers for sure.

What can you expect from the upcoming Java 11 JDK?
Oracle reveals issues in Object Serialization. Plans to drop it from core Java.
Java Multithreading: How to synchronize threads to implement critical sections and avoid race conditions