
Tech News


Chris Dickinson on how to implement Git in Rust

Amrata Joshi
02 Apr 2019
3 min read
Chris Dickinson, a developer working on implementing Git in Rust, shared updates on his project, Git-rs. This is his second attempt at the project. He writes, “I'm trying again this year after reading more of "Programming Rust" (Blandy, Orendorff).” Dickinson maintains a ‘To Do’ list of the steps involved, from reading the objects from the loose store to creating a packfile and publishing the crate. You can check out his full project for his day-by-day updates.

It is also quite interesting to see how developers share projects like this publicly and learn something new on a daily basis from the experience. Users are overall happy to see Dickinson’s contribution. A user commented on Reddit, “Maybe everybody is happy just to use this as a personal learning experience for now, but I think there will be a lot of interest in a shared project eventually.” Users are also sharing experiences from their own projects. A user commented on Hacker News, “I love to see people reimplementing existing tools on their own, because I find that to be a great way to learn more about those tools. I started on a Git implementation in Rust as well, though I haven't worked on it in a while.”

Why work with Rust?

Rust has been gaining tremendous popularity in recent times. Steve Klabnik, a popular blogger and developer, shares his experiences working with Rust and how the language has outgrown him. He writes in his blog post, “I’m the only person who has been to every Rust conference in existence so far. I went to RustCamp, all three RustConfs, all five RustFests so far, all three Rust Belt Rusts. One RustRush. Am I forgetting any? Thirteen Rust conferences in the past four years.” He further adds, “I’m starting to get used to hearing “oh yeah our team has been using Rust in production for a while now, it’s great.” The first time that happened, it felt very strange. Exciting, but strange. I wonder what the next stage of Rust’s growth will feel like.”

Rust is also in the top fifteen languages by number of pull requests in the 2018 GitHub Octoverse report. Moreover, according to the Go User Survey 2018, 19% of respondents ranked Rust as a top preferred language, which indicates a high level of interest in Rust among that audience. Last month, the Rust team announced the stable release of Rust 1.33.0, which brought improvements to const fns, the compiler, and the libraries. Last week, the Rust community organized the Rust Latam 2019 conference in Montevideo, which brought together 200+ Rust developers and enthusiasts from around the world.

https://twitter.com/Sunjay03/status/1112095011951308800

‘Developers’ lives matter’: Chinese developers protest over the “996 work schedule” on GitHub
Sublime Text 3.2 released with Git integration, improved themes, editor control and much more!
Microsoft open sources the Windows Calculator code on GitHub
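For context on what “reading the objects from the loose store” involves: a Git loose object is just a typed, length-prefixed, zlib-compressed run of bytes, addressed by its SHA-1 hash. A minimal sketch of the format (shown in TypeScript with Node built-ins purely for illustration; Git-rs itself is written in Rust):

```typescript
import { createHash } from "crypto";
import { deflateSync, inflateSync } from "zlib";

// A Git loose object is "<type> <size>\0<content>", zlib-deflated, stored
// at .git/objects/<first 2 hash chars>/<remaining 38 chars>.
function hashObject(type: string, content: Buffer): { id: string; stored: Buffer } {
  const raw = Buffer.concat([Buffer.from(`${type} ${content.length}\0`), content]);
  return {
    id: createHash("sha1").update(raw).digest("hex"), // the object id git reports
    stored: deflateSync(raw),                         // the bytes written to disk
  };
}

// Reading an object back from the loose store reverses the process.
function readObject(stored: Buffer): { type: string; content: Buffer } {
  const raw = inflateSync(stored);
  const nul = raw.indexOf(0);
  const type = raw.subarray(0, nul).toString().split(" ")[0];
  return { type, content: raw.subarray(nul + 1) };
}
```

Because the id is a hash of the header plus content, the store is content-addressable: `git hash-object` produces the same id for the same content.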


Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report

Melisha Dsouza
31 Oct 2018
3 min read
Xilinx Inc. has reportedly won orders from Microsoft Corp.’s Azure cloud unit to supply half of the co-processors currently used on Azure servers to handle machine-learning workloads, replacing chips made by Intel Corp., according to people familiar with Microsoft’s plans, as reported by Bloomberg.

Microsoft’s decision reflects its move to add another chip supplier in order to serve more customers interested in machine learning. To date, this domain was served by Intel’s Altera division. Now that Xilinx has bagged the deal, does this mean Intel will no longer serve Microsoft? Bloomberg reported Microsoft’s confirmation that it will continue its relationship with Intel in its current offerings. A Microsoft spokesperson added that “There has been no change of sourcing for existing infrastructure and offerings”. Sources familiar with the arrangement also noted that Xilinx chips will have to achieve performance goals to determine the scope of their deployment.

Cloud vendors these days are investing heavily in research and development centered on machine learning. The past few years have seen a growing need for flexible chips that can be configured to run machine-learning services. Companies like Microsoft, Google and Amazon are massive buyers of server chips and are always looking for alternatives to standard processors to increase the efficiency of their data centres.

Holger Mueller, an analyst with Constellation Research Inc., told SiliconANGLE that “Programmable chips are key to the success of infrastructure-as-a-service providers as they allow them to utilize existing CPU capacity better. They’re also key enablers for next-generation application technologies like machine learning and artificial intelligence.”

Earlier this year, Xilinx CEO Victor Peng made clear his plans to focus on data center customers, saying “data center is an area of rapid technology adoption where customers can quickly take advantage of the orders of magnitude performance and performance per-watt improvement that Xilinx technology enables in applications like artificial intelligence (AI) inference, video and image processing, and genomics”.

Last month, Xilinx made headlines with the announcement of a new breed of computer chips designed specifically for AI inference. These chips combine FPGAs with two higher-performance Arm processors, plus a dedicated AI compute engine, and target the application of deep learning models in consumer and cloud environments. The chips promise higher throughput, lower latency and greater power efficiency than existing hardware. Xilinx appears to be taking noticeable steps to make itself seen in the AI market.

Head over to Bloomberg for the complete coverage of this news.

Microsoft Ignite 2018: New Azure announcements you need to know
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure’s Intelligent Cloud


Google’s language experts are listening to some recordings from its AI assistant

Bhagyashree R
12 Jul 2019
4 min read
After the news of Amazon employees listening to your Echo audio recordings, we now have the non-shocker report of Google employees doing the same. The news was reported by the Belgian public broadcaster VRT NWS on Wednesday. Addressing the report, Google acknowledged in yesterday’s blog post that it does this to make its AI assistant smarter at understanding user commands, whatever the user’s language.

In its privacy policies, the tech giant states, “Google collects data that's meant to make our services faster, smarter, more relevant, and more useful to you. Google Home learns over time to provide better and more personalized suggestions and answers.” Its privacy policies also mention that it shares information with its affiliates and other trusted businesses. What it does not explicitly say is that these recordings are shared with its employees too. Google hires language experts to transcribe audio clips recorded by its AI assistant, and these experts can end up listening to sensitive information about users.

Whenever you make a request to a Google Home smart speaker, or any other smart speaker for that matter, your speech is recorded. These audio recordings are sent to the companies’ servers, which use them to train their speech recognition and natural language understanding systems. A small subset of these recordings, 0.2% in the case of Google, is sent to language experts around the globe who transcribe them as accurately as possible. Their work is not about analyzing what the user is saying but, in fact, how they are saying it. This helps Google’s AI assistant understand the nuances and accents of a particular language.

The problem is that these recordings often contain sensitive data. Google claims in the blog post that the audio snippets are analyzed anonymously, meaning reviewers cannot identify the user they are listening to. “Audio snippets are not associated with user accounts as part of the review process, and reviewers are directed not to transcribe background conversations or other noises, and only to transcribe snippets that are directed to Google,” the tech giant said.

Countering this claim, VRT NWS was able to identify people through personal addresses and other sensitive information in the recordings. “This is undeniably my own voice,” said one man. Another family was able to recognize the voice of their son and grandson in a recording. What is worse is that sometimes these smart speakers record audio clips entirely by accident. Despite the companies claiming that these devices only start recording when they hear their “wake words”, like “Okay Google”, there are many reports showing the devices often start recording by mistake. Out of the thousand or so recordings reviewed by VRT NWS, 153 were captured accidentally.

Google mentioned in the blog post that it applies “a wide range of safeguards to protect user privacy throughout the entire review process.” It further accepted that these safeguards failed in the case of the Belgian contract worker who shared the audio recordings with VRT NWS, violating the company’s data security and privacy rules in the process. “We just learned that one of these language reviewers has violated our data security policies by leaking confidential Dutch audio data. Our Security and Privacy Response teams have been activated on this issue, are investigating, and we will take action. We are conducting a full review of our safeguards in this space to prevent misconduct like this from happening again,” the tech giant wrote.

Companies not being upfront about the transcription process can cause legal trouble for them. Michael Veale, a technology privacy researcher at the Alan Turing Institute in London, told Wired that this practice of sharing users’ personal information might not meet the standards set by the EU’s GDPR. “You have to be very specific on what you’re implementing and how. I think Google hasn’t done that because it would look creepy,” he said.

Read the entire story on VRT NWS’s official website. You can watch the full report on YouTube.

https://youtu.be/x8M4q-KqLuo

Amazon’s partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
Amazon is being sued for recording children’s voices through Alexa without consent
Amazon Alexa is HIPAA-compliant: bigger leap in the health care sector


GNOME 3.32 says goodbye to application menus

Bhagyashree R
12 Oct 2018
3 min read
On Tuesday, GNOME announced that it is planning to retire app menus in its next release, GNOME 3.32. Application menus, or app menus, are the menus you see in the GNOME 3 top bar, with the name and icon of the current app.

Why are application menus being removed in GNOME?

The following are the reasons GNOME is bidding adieu to application menus:

Poor user engagement: Since their introduction, application menus have been a source of usability issues. The app menus haven’t performed well over the years, despite efforts to improve them, and users don’t really engage with them.

Two different locations for menu items: Another reason for the application menus not doing well could be the split between app menus and the menus in application windows. With two different locations for menu items, it becomes easy to look in the wrong place, particularly when one menu is visited more frequently than the other.

Limited adoption by third-party applications: Application menus have seen limited adoption by third-party applications. They are often kept empty, other than the default quit item, and people have learned to ignore them.

What guidelines must developers follow?

All GNOME applications will have to move the items from their app menus to a menu inside the application window. Here are the guidelines developers need to follow:

1. Remove the app menu and move its menu items to the primary menu.
2. If required, split the primary menu into primary and secondary menus.
3. Rename the about menu item from "About" to "About application-name".

Guidelines for the primary menu:

1. The primary menu is the menu in the header bar with the icon of three stacked lines, also referred to as the hamburger menu. In addition to app menu items, primary menus can also contain other menu items.
2. The quit menu item is not required, so it is recommended to remove it from all locations.
3. Move other app menu items to the bottom of the primary menu.
4. A typical arrangement of app menu items in a primary menu is a single group of items: Preferences, Keyboard Shortcuts, Help, About application-name.
5. Applications that use a menu bar should remove their app menu and move any items to the menu bar menus.

If an application fails to remove its application menu by the release of GNOME 3.32, the menu will be shown in the app’s header bar, using the fallback UI that is already provided by GTK.

Read the full announcement on GNOME’s official website.

Fedora 29 beta brings Modularity, GNOME 3.30 support and other changes
GNOME 3.30 released with improved Desktop performance, Screen Sharing, and more
GIMP gets $100K of the $400K donation made to GNOME


Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, the Enhancements Lead of Kubernetes 1.15 at VMware, Kenny Coleman, published a “What's New in Kubernetes 1.15” video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15: Dynamic HA Clusters with kubeadm, Volume Cloning, and CustomResourceDefinitions (CRDs). Coleman highlights each feature and explains its importance to users. Watch the video below for Kenny Coleman’s full talk on Kubernetes 1.15.

https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The key features of this release include extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements.

This is Kubernetes’ second release this year. The previous version, Kubernetes 1.14, released three months ago, had 10 stable enhancements, the most stable features shipped in a single release. In an interview with The New Stack, Claire Laurence, the team lead at Kubernetes, said that in this release, “We’ve had a fair amount of features progress to beta. I think what we’ve been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable.”

Let’s have a brief look at all the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior: a user should not be able to notice whether they are interacting with a CustomResource or with a Golang-native resource. Hence, from v1.15 onwards, Kubernetes will check each schema against a restriction called “structural schema”. This enforces non-polymorphic and complete typing of each field in a CustomResource.

Of the five enhancements, ‘CustomResourceDefinition Defaulting’ is an alpha release. Defaults are specified using the default keyword in the OpenAPI validation schema, and defaulting will be available as alpha in Kubernetes 1.15 for structural schemas.

The other four enhancements are in beta:

CustomResourceDefinition Webhook Conversion: CustomResourceDefinitions gain the ability to convert between different versions on the fly, just as users are used to from native resources.

CustomResourceDefinition OpenAPI Publishing: OpenAPI publishing for CRDs will be available with Kubernetes 1.15 as beta, but only for structural schemas.

CustomResourceDefinitions Pruning: Pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API. A field is unknown if it is not specified in the OpenAPI validation schema. Pruning enforces that only data structures specified by the CRD developer are persisted to etcd. This is the behaviour of native resources, and it will be available for CRDs as well, starting as beta in Kubernetes 1.15.

Admission Webhook Reinvocation and Improvements: In earlier versions, mutating webhooks were only called once, in alphabetical order, so an earlier webhook could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to at least one re-invocation by specifying reinvocationPolicy: IfNeeded. If a later mutating webhook modifies the object, the earlier webhook gets a second chance.

#2 Cluster lifecycle stability and usability improvements

The cluster lifecycle building block, kubeadm, continues to receive features and stability work needed for bootstrapping production clusters efficiently. kubeadm has promoted high availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. Certificate management has become more robust in 1.15, as kubeadm now seamlessly rotates all certificates before expiry. The kubeadm configuration file API moves from v1beta1 to v1beta2 in 1.15, and kubeadm now has its own new logo.

Continued improvement of CSI

In Kubernetes 1.15, the Special Interest Group (SIG) Storage enables migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage worked on bringing CSI to feature parity with in-tree functionality, including resizing and inline volumes. SIG Storage also introduces new alpha functionality in CSI that doesn’t exist in the Kubernetes storage subsystem yet, such as volume cloning. Volume cloning lets users specify another PVC as a “DataSource” when provisioning a new volume. If the underlying storage system supports this functionality and implements the “CLONE_VOLUME” capability in its CSI driver, the new volume becomes a clone of the source volume.

Additional feature updates

- Support for go modules in Kubernetes core.
- Continued preparation for cloud provider extraction and code organization. The cloud provider code has been moved to kubernetes/legacy-cloud-providers for easier removal later and external consumption.
- kubectl get and describe now work with extensions.
- Nodes now support third-party monitoring plugins.
- A new scheduling framework for schedule plugins is now alpha.
- The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha.
- The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue to be deprecated and will eventually be retired in version 1.16.

To know about the additional features in detail, check out the release notes.
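The volume cloning feature described above is driven entirely by the PVC spec. A hedged sketch of what requesting a clone might look like (all names and the storage class here are made up; the bound CSI driver must advertise the CLONE_VOLUME capability):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cloned-pvc              # hypothetical name
spec:
  storageClassName: csi-example-sc   # assumes a CSI driver supporting CLONE_VOLUME
  dataSource:
    name: source-pvc            # the existing PVC to clone
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # must be at least the source volume's size
```

If the driver supports cloning, the new volume is provisioned as a copy of source-pvc rather than empty.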
https://twitter.com/markdeneve/status/1141135440336039936
https://twitter.com/IanColdwater/status/1141485648412651520

For more details on Kubernetes 1.15, check out the Kubernetes blog.

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes


StockX confirms a data breach impacting 6.8 million customers

Sugandha Lahoti
09 Aug 2019
3 min read
StockX, an online marketplace for buying and selling sneakers, suffered a major data breach in May impacting 6.8 million customers. Leaked records included names, email addresses, and hashed passwords.

The full scale of the breach came to light after an unnamed data breach seller contacted TechCrunch claiming to have information about the attack. TechCrunch then verified the claims by contacting people from a sample of 1,000 records, using information only they would know. StockX released a statement yesterday acknowledging that a data breach had indeed occurred.

StockX says it was made aware of the breach on July 26 and immediately launched a forensic investigation, engaging experienced third-party data experts to assist. On finding evidence suggesting customer data may have been accessed by an unknown third party, the company emailed customers on August 3 to make them aware of the incident. This email, surprisingly, asked customers to reset their passwords citing system updates, but said nothing about the data breach, leaving users confused about what caused the alleged system update or why there was no prior warning.

Later the same day, StockX confirmed that it had discovered a data security issue and that an unknown third party had gained access to certain customer data, including customer name, email address, shipping address, username, hashed passwords, and purchase history. The passwords were hashed using salted MD5. According to weleakinfo, this is a very weak hashing algorithm; at least 90% of such hashes can be cracked successfully. Users were infuriated that, instead of being honest, StockX simply sent its customers an email asking them to reset their passwords.
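To illustrate why salted MD5 is considered weak (an illustrative sketch, not StockX's actual code): MD5 is designed to be fast, so an attacker with a leaked hash can test enormous numbers of password guesses per second, salt or no salt. Deliberately slow, memory-hard password hashes such as scrypt (built into Node) make each guess expensive:

```typescript
import { createHash, scryptSync } from "crypto";

// Salted MD5, roughly the scheme the leaked StockX hashes reportedly used.
// The salt defeats precomputed rainbow tables, but MD5 is so fast that
// brute-forcing each hash individually remains cheap.
function md5WithSalt(password: string, salt: string): string {
  return createHash("md5").update(salt + password).digest("hex");
}

// scrypt is deliberately slow and memory-hard, so the same brute-force
// attack costs orders of magnitude more per guess.
function scryptHash(password: string, salt: string): string {
  return scryptSync(password, salt, 32).toString("hex");
}
```

The salt still matters with scrypt: it ensures two users with the same password get different hashes.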
https://twitter.com/Asaud_7/status/1157843000170561536
https://twitter.com/kustoo/status/1157735133157314561
https://twitter.com/RunWithChappy/status/1157851839754383360

StockX has since released a system-wide security update: a full password reset of all customer passwords with an email alerting customers about it, high-frequency credential rotation on all servers and devices, and a lockdown of its cloud computing perimeter. However, the company was a little too late in its ‘ongoing investigation’, as it puts it on its blog. TechCrunch revealed that the seller had put the data up for sale for $300 in a dark web listing, and that one person had already bought it.

StockX is also subject to the EU’s General Data Protection Regulation, considering it has a global customer base, and can potentially be fined for the incident.

https://twitter.com/ComplexSneakers/status/1157754866460221442

According to the FTC, StockX is also not compliant with US laws regarding data breaches.

https://twitter.com/zruss/status/1157785830200619008

Following Capital One data breach, GitHub gets sued and AWS security questioned by a US Senator.
British Airways set to face a record-breaking fine of £183m by the ICO over customer data breach.
U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches.

Say hello to IBM RXN, a free AI Tool in IBM Cloud for predicting chemical reactions

Natasha Mathur
24 Aug 2018
3 min read
Earlier this week, IBM launched an AI tool called IBM RXN in the IBM Cloud at the American Chemical Society meeting in Boston, for predicting chemical reactions in just seconds. IBM RXN is an advanced AI model which is useful in daily research activities and experiments.

IBM presented a web-based app last year at the NIPS 2017 conference which is capable of relating organic chemistry to a language. It applies state-of-the-art neural machine translation methods, leveraging sequence-to-sequence (seq2seq) models to translate reactants into products.

IBM RXN for Chemistry uses a system known as the simplified molecular-input line-entry system, or SMILES, which represents a molecule as a sequence of characters. The model was trained using a combination of reaction datasets, equivalent to a total of 2 million reactions. It is a simple data-driven tool, trained without querying a database or any additional external information.

IBM RXN comprises features such as the Ketcher editor, pre-configured libraries, and a challenge mode. Ketcher is a web-based chemical structure editor designed for chemists, lab scientists, and technicians. It supports selecting, modifying, and erasing connected and unconnected atoms and bonds with the help of a selection tool or the shift key. There is a cleanup tool which checks bond lengths, angles and the spatial arrangement of atoms, and the editor is also capable of checking stereochemistry and structure layout with its advanced features. Additionally, users can build projects and share them with friends or colleagues.

Pre-configured libraries of molecules enable adding reactants and reagents to your Ketcher board in just a few clicks. IBM RXN also provides access to the most common molecules in organic chemistry via the installation of a library to your molecule set.
You can also upload molecules to customize libraries; enhancing the libraries with your own reaction outcomes or with molecules drawn on the Ketcher board is also possible. Finally, there is a challenge mode which puts your organic chemistry knowledge to the test and helps with preparation for class exams.

IBM RXN is a completely free tool, available in the IBM Cloud. For more information, check out the official IBM blog post.

IBM’s DeepLocker: The Artificial Intelligence powered sneaky new breed of Malware
Four IBM facial recognition patents in 2018, we found intriguing
IBM unveils world’s fastest supercomputer with AI capabilities, Summit
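To make the SMILES idea concrete: a molecule is just a string (ethanol is "CCO", for example), which is what lets seq2seq models treat reaction prediction as a translation problem over token sequences. A toy tokenizer for bracket-free SMILES (an illustrative sketch, not IBM RXN's actual preprocessing):

```typescript
// Two-character element symbols that must not be split into single letters.
const TWO_LETTER = ["Cl", "Br"];

// Split a (bracket-free) SMILES string into atom and bond tokens.
function tokenize(smiles: string): string[] {
  const tokens: string[] = [];
  for (let i = 0; i < smiles.length; ) {
    const pair = smiles.slice(i, i + 2);
    if (TWO_LETTER.includes(pair)) {
      tokens.push(pair);   // e.g. "Cl" stays one token
      i += 2;
    } else {
      tokens.push(smiles[i]); // single atoms, ring digits, bond symbols
      i += 1;
    }
  }
  return tokens;
}
```

A real SMILES tokenizer also handles bracketed atoms like [NH4+], but the principle is the same: the model sees a sequence of such tokens, not a molecular graph.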


TypeScript 3.2 released with configuration inheritance and more

Prasad Ramesh
30 Nov 2018
7 min read
TypeScript 3.2 was released yesterday. TypeScript is a language that brings static type-checking to JavaScript, enabling developers to catch issues before the code is even run. TypeScript 3.2 includes the latest JavaScript features from the ECMAScript standard. In addition to type-checking, it provides tooling in editors to jump to variable definitions, find the uses of a function, and automate refactorings.

You can install TypeScript 3.2 via NuGet, or via npm as follows:

npm install -g typescript

Now let’s look at the new features in TypeScript 3.2.

strictBindCallApply

TypeScript 3.2 comes with stricter checking for bind, call, and apply. In JavaScript, bind, call, and apply are methods on functions that allow actions like binding this and partially applying arguments; they also allow you to call functions with a different value for this and to call functions with an array for their arguments. Earlier, TypeScript didn’t have the power to model these functions. Demand to model these patterns in a type-safe way led the TypeScript developers to revisit the problem.

Two features opened up the right abstractions to accurately type bind, call, and apply without any hard-coding: this parameter types from TypeScript 2.0, and modeling parameter lists with tuple types from TypeScript 3.0. The combination of the two ensures that uses of bind, call, and apply are strictly checked when the new strictBindCallApply flag is used. With this flag, the methods on callable objects are described by a new global type, CallableFunction, which declares stricter versions of the signatures for bind, call, and apply. Similarly, methods on constructable (non-callable) objects are described by a new global type called NewableFunction.

A caveat of this new functionality is that bind, call, and apply can’t yet fully model generic functions or functions with overloads.
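A minimal sketch of what the new flag catches (assuming the code is compiled with --strictBindCallApply):

```typescript
function sum(a: number, b: number): number {
  return a + b;
}

// call and apply are now checked against sum's real signature.
const three = sum.call(undefined, 1, 2);   // ok: three is number
// sum.call(undefined, "one", 2);          // error under strictBindCallApply

// bind produces a correctly typed partially applied function.
const addTen = sum.bind(undefined, 10);    // (b: number) => number
const thirteen = addTen(3);
```

Without the flag, both commented and uncommented calls type-check as before; the flag only tightens what was previously loosely typed.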
Object spread on generic types

JavaScript has a handy way of copying properties from an existing object into a new one, called “spreads”: to spread an existing object into a new object, you write an element with three consecutive periods (...). TypeScript does well here when it has enough information about the type, but until now spreads wouldn’t work with generics at all.

A new concept in the type system, an “object spread type”, could have been used. This would have been a new type operator written { ...T, ...U } to reflect the syntax of an object spread; if T and U were known, the type would flatten to some new object type. However, this approach was complex and required adding new rules to type relationships and inference. After exploring several different avenues, the team arrived at two conclusions: users were fine modeling the behavior with intersection types for most uses of spreads in JavaScript (for example, Foo & Bar); and Object.assign, a function that exhibits most of the behavior of spreading objects, is already modeled using intersection types, with very little negative feedback around that. Intersections model the common cases, and they’re relatively easy to reason about for both users and the type system. So TypeScript 3.2 now allows object spreads on generics and models them using intersections.

Object rest on generic types

Object rest patterns are something of a dual to object spreads: instead of creating a new object with some extra or overridden properties, a rest pattern creates a new object that lacks some specified properties.

Configuration inheritance via node_modules packages

TypeScript has long supported extending tsconfig.json files via the extends field. This feature is useful to avoid duplicating configuration that could easily fall out of sync, and it works best when multiple projects are co-located in the same repository, so that each project can reference a common “base” tsconfig.json.
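The spread-as-intersection behavior and its rest-pattern dual described above can be sketched as follows (an illustrative example, not taken from the release notes):

```typescript
// With TypeScript 3.2, spreading generics type-checks: the result of
// { ...first, ...second } is modeled as the intersection T & U.
function merge<T extends object, U extends object>(first: T, second: U) {
  return { ...first, ...second }; // inferred as T & U
}

const merged = merge({ a: 1 }, { b: "two" }); // { a: number } & { b: string }

// The dual: an object rest pattern on a generic drops known properties.
function withoutId<T extends { id: number }>(value: T) {
  const { id, ...rest } = value; // rest lacks the id property
  return rest;
}

const rest = withoutId({ id: 7, name: "x" }); // { name: string }
```

Both functions would have been type errors on generics before 3.2.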
But some projects are written and published as fully independent packages. Such projects don’t have a common file they can reference, so as a workaround users could create a separate package and reference that. TypeScript 3.2 now resolves tsconfig.json files from node_modules: when a bare path is used for the "extends" field in tsconfig.json, TypeScript will dive into node_modules packages to find it.

Diagnosing tsconfig.json with --showConfig

The TypeScript compiler, tsc, now supports a new flag called --showConfig. On running tsc --showConfig, TypeScript calculates the effective tsconfig.json and prints it out.

BigInt

BigInts are part of an upcoming ECMAScript proposal that allows modeling theoretically arbitrarily large integers. TypeScript 3.2 comes with type-checking for BigInts, along with support for emitting BigInt literals when targeting esnext. BigInt support introduces a new primitive type called bigint and is only available for the esnext target.

Object.defineProperty declarations in JavaScript

When writing in JavaScript files using allowJs, TypeScript 3.2 recognizes declarations that use Object.defineProperty. This means better completions and stronger type-checking when enabling type-checking in JavaScript files.

Improvements in error messages

A few things have been added in TypeScript 3.2 that make the language easier to use: better missing property errors; better error spans in arrays and arrow functions; an error on most-overlapping types in unions, or “pick most overlappy type”; related spans on a typed this being shadowed; and a new warning message that says “Did you forget a semicolon?”
on parenthesized expressions on the next line is added More specific messages are displayed when assigning to const/readonly bindings When extending complex types, more accurate messages are shown Relative module names are used in error messages Improved narrowing for tagged unions TypeScript now makes narrowing easier by relaxing rules for a discriminant property. The common properties of unions are now considered discriminants as long as they contain some singleton type and contain no generics. For example, a string literal, null, or undefined. Editing improvements The TypeScript project doesn’t have a compiler/type-checker. The core components of the compiler provide a cross-platform open-source language service that can power smart editor features. These features include go-to-definition, find-all-references, and a number of quick fixes and refactorings. Implicit any suggestions and “infer from usage” fixes noImplicitAny is a strict checking mode, and it helps ensure the code is as fully typed as possible. This also leads to a better editing experience. TypeScript 3.2 produces suggestions for most of the variables and parameters that would have been reported as having implicit any types. TypeScript provides a quick fix to automatically infer the types when an editor reports these suggestions. Other fixes There are two smaller quick fixes: A missing new is added when a constructor is called accidentally. An intermediate assertion is added to unknown when types are sufficiently unrelated. Improved formatting TypeScript 3.2 is smarter in formatting several different constructs. Breaking changes and deprecations TypeScript has moved more to generating DOM declarations in lib.d.ts by leveraging IDL files. Certain parameters no longer accept null or accept more specific types. Certain WebKit-specific properties have been deprecated. wheelDelta and friends have been removed as they are deprecated properties on WheelEvents. 
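The relaxed discriminant rule for tagged unions can be sketched as follows. The Result type below is a hypothetical example: its error property mixes a singleton type (null) with a non-singleton one (Error), which is now enough for TypeScript to narrow on:

```typescript
// `error` is the discriminant: the union of its types (Error | null)
// contains the singleton type null, so narrowing works since TS 3.2.
type Result<T> =
  | { error: Error; data: null }
  | { error: null; data: T };

function unwrap<T>(result: Result<T>): T {
  if (result.error) {
    // narrowed to { error: Error; data: null }
    throw result.error;
  }
  // narrowed to { error: null; data: T }
  return result.data;
}

console.log(unwrap({ error: null, data: 42 })); // 42
```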
JSX resolution changes The logic for resolving JSX invocations has been unified with the logic for resolving function calls. This simplifies the compiler codebase and improves certain use cases. Future TypeScript releases will require Visual Studio 2017 or higher. For more details, visit the Microsoft Blog. Introducing ReX.js v1.0.0 a companion library for RegEx written in TypeScript Vue.js 3.0 is ditching JavaScript for TypeScript. What else is new? Babel 7 released with Typescript and JSX fragment support
Facebook introduces Rosetta, a scalable OCR system that understands text on images using Faster-RCNN and CNN

Bhagyashree R
12 Sep 2018
3 min read
Yesterday, researchers at Facebook introduced a machine learning system named Rosetta for scalable optical character recognition (OCR). This model extracts text from more than a billion public Facebook and Instagram images and video frames. The extracted text is then fed into a text recognition model that has been trained on classifiers, which helps it understand the context of the text and the image together. Why was Rosetta introduced? Rosetta helps in the following scenarios: Provide a better user experience by giving users more relevant photo search results. Make Facebook more accessible for the visually impaired by incorporating the texts into screen readers. Help Facebook proactively identify inappropriate or harmful content. Help improve the accuracy of photo classification in News Feed to surface more personalized content. How does it work? Rosetta uses the following text extraction model: Source: Facebook Text extraction on an image is done in the following two steps: Text detection In this step, rectangular regions that potentially contain text are detected. Detection is based on Faster R-CNN, a state-of-the-art object detection network, but replaces the ResNet convolutional body with a ShuffleNet-based architecture for efficiency reasons. The anchors in the region proposal network (RPN) are also modified to generate wider proposals, as text words are typically wider than the objects the RPN was designed for. The whole detection system is trained jointly in a supervised, end-to-end manner. The model is bootstrapped with an in-house synthetic data set and then fine-tuned with human-annotated data sets so that it learns real-world characteristics. It is trained using the recently open-sourced Detectron framework, powered by Caffe2. 
Text recognition The following image shows the architecture of the text recognition model: Source: Facebook In the second step, for each detected region a convolutional neural network (CNN) is used to recognize and transcribe the word in the region. The model uses a CNN based on the ResNet18 architecture, as this architecture is both accurate and computationally efficient. For training, determining what the text in an image says is treated as a sequence prediction problem: images containing the text to be recognized are the input, and the output is the sequence of characters in the word image. Treating the problem as one of sequence prediction allows the system to recognize words of arbitrary length, including words that weren't seen during training. This two-step design provides several benefits, including decoupling the training of the detection and recognition models, recognizing words in parallel, and independently supporting text recognition for different languages. Rosetta has been widely adopted by various products and teams within Facebook and Instagram. It offers a cloud API for text extraction from images and processes a large volume of images uploaded to Facebook every day. In the future, the team plans to extend the system to extract text from videos more efficiently and to support more of the languages used on Facebook. To get a more in-depth idea of how Rosetta works, check out the researchers' post on the Facebook code blog and the paper: Rosetta: Large Scale System for Text Detection and Recognition in Images. Why learn machine learning as a non-techie? Is the machine learning process similar to how humans learn? Facebook launches a 6-part Machine Learning video series
Oracle releases VirtualBox 6.0.0 with improved graphics, user interface and more

Amrata Joshi
19 Dec 2018
2 min read
Yesterday, the team at Oracle released VirtualBox 6.0.0, a free and open-source hosted hypervisor for x86 computers. VirtualBox was initially developed by Innotek GmbH, which was acquired by Sun Microsystems in 2008 and then by Oracle in 2010. VirtualBox is a virtualization product for enterprise as well as home use, and an extremely feature-rich, high-performance product for enterprise customers. Features of VirtualBox 6.0.0 User interface VirtualBox 6.0.0 comes with greatly improved HiDPI and scaling support, including better detection and per-machine configuration. The user interface is simpler and more powerful. It also comes with a new file manager that enables users to control the guest file system and copy files between host and guest. Graphics VirtualBox 6.0.0 features 3D graphics support for Windows guests, and VMSVGA 3D graphics device emulation on Linux and Solaris guests. It adds support for surround speaker setups, and a new vboximg-mount utility on Apple hosts for accessing the content of guest disks on the host. VirtualBox 6.0.0 adds support for using Hyper-V as a fallback execution core on Windows hosts, so that VMs can still run, albeit with reduced performance. It also supports exporting a virtual machine to Oracle Cloud Infrastructure, and brings better application and virtual machine set-up. Linux guests This release now supports Linux 4.20 and VMSVGA. The process of building vboxvideo on the EL 7.6 standard kernel has been improved. Other features Support for DHCP options. Initial support for macOS guests. It is now possible to configure up to four custom ACPI tables for a VM. Video and audio recordings can now be enabled separately. Better support for attaching and detaching remote desktop connections. Major bug fixes The previous release could show the wrong instruction after a single-step exception with rdtsc; this has been fixed. 
Audio/video recording has been improved. Issues with serial port emulation have been fixed. A resizing issue with disk images has been resolved. Shared folder auto-mounting has been improved. Issues with the BIOS have been fixed. Read more about this news in VirtualBox's changelog. Installation of Oracle VM VirtualBox on Linux Setting up a Joomla Web Server using Virtualbox, TurnkeyLinux and DynDNS How to Install VirtualBox Guest Additions
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others

Savia Lobo
25 Jun 2019
5 min read
Yesterday, many parts of the Internet faced an unprecedented outage as Verizon, the popular Internet transit provider, accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA. According to The Register, “systems around the planet were automatically updated, and connections destined for Facebook, Cloudflare, and others, ended up going through DQE and Allegheny, which buckled under the strain, causing traffic to disappear into a black hole”. According to Cloudflare, “What exacerbated the problem today was the involvement of a “BGP Optimizer” product from Noction. This product has a feature that splits up received IP prefixes into smaller, contributing parts (called more-specifics). For example, our own IPv4 route 104.20.0.0/20 was turned into 104.20.0.0/21 and 104.20.8.0/21”. Many Google users were unable to access the web using the Google browser, and some users say Google Calendar went down too. Amazon users were also unable to use some services, such as Amazon books, as they could not reach the site. Source: Downdetector Also, in another incident, on June 6, more than 70,000 BGP routes were leaked from Swiss colocation company Safe Host to China Telecom in Frankfurt, Germany, which then announced them on the global internet. “This resulted in a massive rerouting of internet traffic via China Telecom systems in Europe, disrupting connectivity for netizens: a lot of data that should have gone to European cellular networks was instead piped to China Telecom-controlled boxes”, The Register reports. How BGP amplified the outage The Internet is made up of networks called Autonomous Systems (AS), and each of these networks has a unique identifier, called an AS number. 
All these networks are interconnected using the Border Gateway Protocol (BGP), which joins them together and enables traffic to travel, for example, from an ISP to a popular website at a far-off location. Source: Cloudflare With the help of BGP, networks exchange route information that can either be specific, similar to finding a specific city on your GPS, or very general, like pointing your GPS to a state. DQE Communications (AS33154), an Internet Service Provider in Pennsylvania, was using a BGP optimizer in its network. It announced these specific routes to its customer, Allegheny Technologies Inc (AS396531), a steel company based in Pittsburgh. This entire routing information was sent to Verizon (AS701), which accepted it and passed it on to the world. “Verizon’s lack of filtering turned this into a major incident that affected many Internet services”, Cloudflare mentions. “What this means is that suddenly Verizon, Allegheny, and DQE had to deal with a stampede of Internet users trying to access those services through their network. None of these networks were suitably equipped to deal with this drastic increase in traffic, causing disruption in service.” Job Snijders, an internet architect for NTT Communications, wrote in a network operators' mailing list, “While it is easy to point at the alleged BGP optimizer as the root cause, I do think we now have observed a cascading catastrophic failure both in process and technologies.” https://twitter.com/bgpmon/status/1143149817473847296 Cloudflare's CTO John Graham-Cumming told El Reg's Richard Speed, "A customer of Verizon in the US started announcing essentially that a very large amount of the internet belonged to them. For reasons that are a bit hard to understand, Verizon decided to pass that on to the rest of the world." "But normally [a large ISP like Verizon] would filter it out if some small provider said they own the internet," he further added. 
“If Verizon had used RPKI, they would have seen that the advertised routes were not valid, and the routes could have been automatically dropped by the router”, Cloudflare said. https://twitter.com/eastdakota/status/1143182575680143361 https://twitter.com/atoonk/status/1143139749915320321 Rerouting is highly dangerous, as criminals, hackers, or government spies could be lurking around to grab such a free flow of data. This creates security concerns among users, as their data can be used for surveillance, disruption, and financial theft. Cloudflare was majorly affected by this outage: “It is unfortunate that while we tried both e-mail and phone calls to reach out to Verizon, at the time of writing this article (over 8 hours after the incident), we have not heard back from them, nor are we aware of them taking action to resolve the issue”, the company said in its blogpost. One user commented, “BGP needs a SERIOUS revamp with Security 101 in mind.....RPKI + ROA's is 100% needed and the ISPs need to stop being CHEAP. Either build it by Federal Requirement, at least in the Nation States that take their internet traffic as Citizen private data or do it as Internet 3.0 cause 2.0 flaked! Either way, "Path Validation" is another component of BGP that should be looked at but honestly, that is going to slow path selection down and to instrument it at a scale where the internet would benefit = not worth it and won't happen. SMH largest internet GAP = BGP "accidental" hijacks” Verizon, in a statement to The Register, said, "There was an intermittent disruption in internet service for some [Verizon] FiOS customers earlier this morning. Our engineers resolved the issue around 9 am ET." https://twitter.com/atoonk/status/1143145626516914176 To know more about this news in detail, head over to Cloudflare’s blog. 
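The "more-specifics" splitting Cloudflare describes, where 104.20.0.0/20 became 104.20.0.0/21 and 104.20.8.0/21, is simple prefix arithmetic. A minimal TypeScript sketch (illustrative only; this is not Noction's implementation):

```typescript
// Split an IPv4 prefix into its two more-specific halves, the way a
// BGP optimizer de-aggregates routes. IPv4 only, no input validation.
function toInt(ip: string): number {
  return ip.split(".").reduce((acc, octet) => acc * 256 + Number(octet), 0);
}

function toIp(n: number): string {
  return [24, 16, 8, 0].map(shift => (n >>> shift) & 255).join(".");
}

function moreSpecifics(prefix: string): [string, string] {
  const [ip, len] = prefix.split("/");
  const newLen = Number(len) + 1;
  const base = toInt(ip);
  const halfSize = Math.pow(2, 32 - newLen); // addresses in each half
  return [`${toIp(base)}/${newLen}`, `${toIp(base + halfSize)}/${newLen}`];
}

console.log(moreSpecifics("104.20.0.0/20"));
// [ '104.20.0.0/21', '104.20.8.0/21' ]
```

Announcing these longer prefixes beats the legitimate /20 because BGP routers always prefer the most specific matching route, which is why the leaked routes attracted the traffic.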
OpenSSH code gets an update to protect against side-channel attacks Red Badger Tech Director Viktor Charypar talks monorepos, lifelong learning, and the challenges facing open source software [Interview] Facebook signs on more than a dozen backers for its GlobalCoin cryptocurrency including Visa, Mastercard, PayPal and Uber
Top 5 Google I/O 2018 conference Day 1 Highlights: Android P, Android Things, ARCore, ML kit and Lighthouse

Sugandha Lahoti
10 May 2018
7 min read
Google I/O 2018, the most anticipated conference by Google, kicked off yesterday at Shoreline Amphitheatre in Mountain View, California. It seems like just yesterday that Google I/O 2017 ended and we were still in awe of the AI capabilities announced last time, but here we are, with the next annual I/O event in front of us. On day 1, CEO Sundar Pichai delivered the keynote, promising a three-day gala event for over 7,200 attendees with a plethora of announcements and updates to Google products. I/O ’18 will also run 400+ extended events in 85 countries. Artificial intelligence was a big theme throughout. Google showcased ML Kit, an SDK for adding Google’s machine learning smarts to Android and iOS apps. New features were added to Android P, Google’s most ambitious Android update yet. Not to mention the release of Lighthouse 3.0, new anchor tools for multiplayer AR, and updates to Google Assistant, Gmail, Google Maps, and more. Here are our top picks from Day 1 of Google I/O 2018. Machine Learning for Mobile Developers Google’s newly launched ML Kit SDK allows mobile developers to make use of Google’s machine learning expertise in the development of Android and iOS apps. The kit allows integration of mobile apps with a number of pre-built, Google-provided machine learning models, which support text recognition, face detection, barcode scanning, image labeling, and landmark recognition, among other things. What stands out is that ML Kit is available both online and offline, depending on network availability and the developer’s preference. In the coming months, Google plans to add a smart reply API and a high-density face contour feature for the face detection API to the list of currently available APIs. New Augmented Reality experiences come to Android At the Google I/O conference, Google also announced several updates to its ARCore platform, focused on overcoming the limitations of existing AR-enabled smartphones. 
Multi-User and shared AR New cloud anchor tools will enable developers to create new types of collaborative experiences, which can be shared with multiple users across both Android and iOS devices. More surfaces to play around with Vertical Plane Detection, a new feature of ARCore, allows users to place AR objects on more surfaces, like textured walls. Another capability, Augmented Images, brings images to life just by pointing a phone at them. https://www.youtube.com/watch?v=uDs9rd7yD0I Simple AR development The new ARCore updates also simplify AR development for Java developers with the introduction of Sceneform. Developers can now build immersive, 3D apps optimized for mobile without having to learn complicated APIs like OpenGL. They can use Sceneform to build AR apps from scratch as well as to add AR features to existing ones. Android P: the most ambitious Android OS yet The name of the new version is yet to be decided, but judging by Google's trend of naming the OS after a dessert, it may be Pumpkin Pie, Peppermint Patty, or Popsicle? I’m voting for Popsicle! Apart from the name, here are the other major features of the new OS: Jetpack: Jetpack is the next generation of the Android Support Library, redefining how developers write applications for Android. Jetpack manages tedious activities like background tasks, navigation, and lifecycle management, so developers can focus on core app development. Android KTX: At last year's I/O conference, Google made the Kotlin language a first-class citizen for developing Android apps. Continuing that trend, Google announced Android KTX at I/O ’18. It is a part of Jetpack that further optimizes the Kotlin developer experience across libraries, tooling, runtime, documentation, and training. 
Android Studio 3.2: There are 20 major features in this release of Android Studio, spanning from ultra-fast Android Emulator Snapshots and Sample Data in the Layout Editor to a brand-new Energy Profiler for measuring an app's battery impact. Material Design 2: While other Google apps like Gmail and Tasks have already received a recent visual update, in Android P Google is overhauling the OS’s overall look with what people are calling Material Design 2. Google calls it Material Themes, a powerful plugin to help designers implement Material Design in their apps. The new interface is designed to be “responsive and efficient” while feeling “cohesive” with the rest of the G Suite family of apps. Adaptive Battery: Apart from refreshing the looks, Google has been busy improving performance. It has partnered with its AI subsidiary DeepMind on a smart battery management system for Android. Scaling IoT with Android Things 1.0 After over 100,000 SDK downloads of the Developer Preview, Google announced Android Things 1.0 with long-term support for production devices. App Library allows developers to manage APKs more easily without needing to package them together in a separate zipped bundle. Visual storage layout helps in configuring the device storage allocated to apps and data for each build, and gives an overview of how much storage your apps require. Group sharing extends product sharing to include support for Google Groups. Updated permissions give developers more control over the permissions used by apps on their devices. Developers can manage their Android Things devices via a cloud-based Android Things Console. Devices themselves can manage OS and app updates, view analytics for device health and performance, and issue test builds of the software package. 
Lighthouse 3.0 for better web optimization A new update to Lighthouse, Google's web optimization tool, was also announced at Google I/O. Lighthouse 3.0 offers shorter waits and more guidance, helping developers efficiently optimize their websites and audit their performance. It uses simulated throttling: a new internal auditing engine runs audits under normal network and CPU settings and then estimates how long the page would take to load under mobile conditions. Lighthouse 3.0 also features a new report UI, along with invocation, scoring, audit, and output changes. Other highlights Google announced the rebranding of its Google Research division to Google AI. Google made a massive “continued conversation” update to Google Assistant with Google Duplex, a new technology that enables Google's machine intelligence-powered virtual assistant to conduct a natural conversation with a human over the phone. Google also announced the release of the third beta of Flutter, Google’s mobile app SDK for creating high-quality, native user experiences on mobile. Google Photos gets more AI-powered fixes such as B&W photo colorization, brightness correction, and suggested rotations. Google’s first Smart Displays, the screen-enriched smart speakers, will launch in July, powered by Google Assistant and YouTube. Google Assistant is coming to Google Maps, available on iOS and Android. There are still two more days left of Google I/O, and going by the day 1 announcements, I can’t wait to see what’s next. I am especially looking forward to knowing more about Android Auto, Google’s Tour Creator, and Google Lens. You can view the livestream and other sessions on the Google I/O conference page. Keep visiting Packt Hub for more updates on Google I/O, Microsoft Build and other key tech conferences happening this month. 
Google’s Android Things, developer preview 8: First look Google open sources Seurat to bring high precision graphics to Mobile VR Microsoft Build 2018 Day 1: Azure meets Artificial Intelligence
Rigetti plans to deploy 128 qubit chip Quantum computer

Fatema Patrawala
16 Aug 2018
3 min read
Rigetti Computing is committed to building the world’s most powerful computers, and it believes the true value of quantum will be unlocked by practical applications. Rigetti CEO Chad Rigetti recently posted on Medium about the company's plan to deploy a 128-qubit quantum computing system, challenging Google, IBM, and Intel for leadership in this emerging technology. The system is planned for deployment in the next 12 months, alongside investment in resources at the application layer to encourage experimentation on quantum computers. Over the past year, Rigetti has built 8-qubit and 19-qubit superconducting quantum processors, which are accessible to users over the cloud through its open-source software platform Forest. These chips have helped researchers around the globe carry out and test programs on hybrid quantum-classical computers. However, to drive practical use of quantum computing today, Rigetti must be able to scale and improve the performance of the chips and connect them to the electronics on which they run. To achieve this, the next phase of quantum computing will require more power at the hardware level to drive better results. Rigetti is in a unique position to solve this problem and build systems that scale. Chad Rigetti adds, “Our 128-qubit chip is developed on a new form factor that lends itself to rapid scaling. Because our in-house design, fab, software, and applications teams work closely together, we’re able to iterate and deploy new systems quickly. Our custom control electronics are designed specifically for hybrid quantum-classical computers, and we have begun integrating a 3D signaling architecture that will allow for truly scalable quantum chips. 
Over the next year, we’ll put these pieces together to bring more power to researchers and developers.” While focused on building the 128-qubit chip, the Rigetti team is also looking at ways to enhance the application layer by pursuing quantum advantage in three areas: quantum simulation, optimization, and machine learning. The team believes quantum advantage will be achieved by creating a solution that is faster, cheaper, and of better quality. It has posed an open question as to which industry will build the first commercially useful application that adds tremendous value to researchers and businesses around the world. Read the full coverage in the Rigetti Medium post. Quantum Computing is poised to take a quantum leap with industries and governments on its side Q# 101: Getting to know the basics of Microsoft’s new quantum computing language PyCon US 2018 Highlights: Quantum computing, blockchains and serverless rule!
What if buildings of the future could compute? European researchers make a proposal.

Prasad Ramesh
23 Nov 2018
3 min read
European researchers have proposed an idea for buildings that could compute. In the paper On buildings that compute. A proposal, published this week, they propose integrating computation into various parts of a building, from cement and bricks to paint. What is the idea about? Smart homes today are made up of several individual smart appliances, which may work individually or be interconnected via a central hub. “What if intelligent matter of our surroundings could understand us humans?” The idea is that the walls of a building, in addition to supporting the roof, would have more functionality: sensing, calculating, communicating, and even producing power. Each brick or block could be thought of as a decentralized computing entity, and these blocks could contribute to a large-scale parallel computation. This would transform a smart building into an intelligent computing unit that people can live in and interact with. Such smart buildings that compute, as the researchers say, can potentially offer protection from crime, natural disasters, and structural damage within the building, or simply send a greeting to the people residing there. When nanotechnology meets embedded computing The proposal involves using nanotechnology to embed computation and sensing directly into the construction materials. This includes intelligent concrete blocks and stimuli-responsive smart paint. The photosensitive paint would sense the internal and external environment, while a nanomaterial-infused concrete composition would sense the building environment to implement parallel information processing on a large scale, resulting in distributed decision making. The result is a building that can be seen as a huge parallel computer consisting of computing concrete blocks. The key concepts behind the idea are functional nanoparticles which are photo-, chemo- and electro-sensitive. 
A range of electrical properties would span all the electronic elements mixed into the concrete. The concrete is used to make building blocks which are equipped with processors. These processors gather information from distributed sensory elements, help in decision making, communicate location, and enable advanced computing. Together, the blocks form a wall which acts as a huge parallel array processor. The researchers envision a single building, or a small colony, turning into a large-scale universal computing unit. This is an interesting idea, bizarre even, but its practicality is blurry. Can its applications justify the cost involved in creating such a building? There is also a question of sustainability: how long will the building last before it has to be redeveloped? I for one think that doing so will almost certainly undo the computational aspect of it. For more details, read the research paper. Home Assistant: an open source Python home automation hub to rule all things smart The iRobot Roomba i7+ is a cleaning robot that maps and stores your house and also empties the trash automatically. Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration
Mozilla engineer shares the implications of rewriting browser internals in Rust

Bhagyashree R
01 Mar 2019
2 min read
Yesterday, Diane Hosfelt, a Research Engineer at Mozilla, shared what she and her team learned from rewriting Firefox internals in Rust. Taking Quantum CSS as a case study, she touched upon the potential security vulnerabilities that could have been prevented if it had been written in Rust from the very beginning. Why did Mozilla decide to rewrite Firefox internals in Rust? Quantum CSS is part of Mozilla’s Project Quantum, under which it is rewriting Firefox internals to make the browser faster. One of the major parts of this project is Servo, an engine designed to provide better concurrency and parallelism. To achieve these goals, Mozilla decided to write Servo in Rust rather than C++. Rust is similar to C++ in some ways while differing in the abstractions and data structures it uses. It was created by Mozilla with concurrency safety in mind: its type system and memory safety make programs written in Rust thread-safe. What types of bugs does Rust prevent? Overall, Rust prevents bugs related to memory, bounds, null or uninitialized variables, and integer overflow by default. Hosfelt mentioned in her blog post, “Due to the overlap between memory safety violations and security-related bugs, we can say that Rust code should result in fewer critical CVEs (Common Vulnerabilities and Exposures).” However, there are some types of bugs that Rust does not address, like correctness bugs. According to Hosfelt, Rust is a good option in the following cases: When your program must process untrusted input safely. When you want to use parallelism for better performance. When you are integrating isolated components into an existing codebase. You can go through the blog post by Diane Hosfelt on Mozilla’s website. Mozilla shares key takeaways from the Design Tools survey Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant