
Tech News - Data

1209 Articles

AWS announces Open Distro for Elasticsearch licensed under Apache 2.0

Savia Lobo
12 Mar 2019
4 min read
Amazon Web Services has announced a new open source distribution of Elasticsearch, named Open Distro for Elasticsearch, in collaboration with Expedia Group and Netflix. Open Distro for Elasticsearch will focus on driving innovation with value-added features to ensure users have a feature-rich option that is fully open source. It gives developers the freedom to contribute open source value-added features on top of the Apache 2.0-licensed Elasticsearch upstream project.

The need for Open Distro for Elasticsearch

Elasticsearch’s Apache 2.0 license enabled it to gain adoption quickly and allowed unrestricted use of the software. However, since June 2018, the community has witnessed a significant intermixing of proprietary code into the code base. While an Apache 2.0-licensed download is still available, there is an extreme lack of clarity as to what customers who care about open source are getting and what they can depend on. “Enterprise developers may inadvertently apply a fix or enhancement to the proprietary source code. This is hard to track and govern, could lead to a breach of license, and could lead to immediate termination of rights (for both proprietary free and paid).”

Individual code commits also increasingly contain both open source and proprietary code, making it difficult for developers who want to work only on open source to contribute and participate. Also, the innovation focus has shifted from furthering the open source distribution to making the proprietary distribution popular. This means that the majority of new Elasticsearch users are now, in fact, running proprietary software. “We have discussed our concerns with Elastic, the maintainers of Elasticsearch, including offering to dedicate significant resources to help support a community-driven, non-intermingled version of Elasticsearch. They have made it clear that they intend to continue on their current path,” AWS states in its blog.

These changes have also created uncertainty about the longevity of the open source project, as it is becoming less innovation focused. Customers also want the freedom to run the software anywhere and to self-support at any point in time if they need to. This has led to the creation of Open Distro for Elasticsearch.

Features of Open Distro for Elasticsearch

Keeps data security in check
Open Distro for Elasticsearch protects users’ clusters by providing advanced security features, including a number of authentication options such as Active Directory and OpenID, encryption in-flight, fine-grained access control, detailed audit logging, advanced compliance features, and more.

Automatic notifications
Open Distro for Elasticsearch provides a powerful, easy-to-use event monitoring and alerting system. This enables a user to monitor data and send notifications automatically to their stakeholders. It also includes an intuitive Kibana interface and a powerful API, which further eases setting up and managing alerts.

Increased SQL query interactions
It also allows users who are already comfortable with SQL to interact with their Elasticsearch cluster and integrate it with other SQL-compliant systems. SQL offers more than 40 functions, data types, and commands, including join support and direct export to CSV.

Deep diagnostic insights with Performance Analyzer
Performance Analyzer provides deep visibility into system bottlenecks by allowing users to query Elasticsearch metrics alongside detailed network, disk, and operating system stats. Performance Analyzer runs independently without any performance impact, even when Elasticsearch is under stress.

According to the AWS Open Source Blog, “With the first release, our goal is to address many critical features missing from open source Elasticsearch, such as security, event monitoring and alerting, and SQL support.”

Subbu Allamaraju, VP Cloud Architecture at Expedia Group, said, “We are excited about the Open Distro for Elasticsearch initiative, which aims to accelerate the feature set available to open source Elasticsearch users like us. This initiative also helps in reassuring our continued investment in the technology.”

Christian Kaiser, VP Platform Engineering at Netflix, said, “Open Distro for Elasticsearch will allow us to freely contribute to an Elasticsearch distribution that we can be confident will remain open source and community-driven.”

To know more about Open Distro for Elasticsearch in detail, visit the official AWS blog post.

GitHub releases Vulcanizer, a new Golang library for operating Elasticsearch
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
How does Elasticsearch work? [Tutorial]
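The SQL feature described above is exposed by Open Distro's SQL plugin as an HTTP endpoint. Below is a minimal sketch of querying it from Python; the cluster address, index name, and query are placeholders and assume a locally running Open Distro cluster with the plugin enabled.

```python
# Minimal sketch: querying an Open Distro for Elasticsearch cluster through its SQL plugin.
# Assumptions: a cluster is reachable at localhost:9200 with the SQL plugin enabled, and an
# index named "logs" exists -- both are placeholders, not details from the announcement.
import requests

SQL_ENDPOINT = "http://localhost:9200/_opendistro/_sql"

query = """
SELECT status, COUNT(*) AS hits
FROM logs
WHERE status >= 500
GROUP BY status
"""

resp = requests.post(SQL_ENDPOINT, json={"query": query}, timeout=10)
resp.raise_for_status()
print(resp.json())
```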

What we learned from CES 2018: Self-driving cars and AI chips are the rage!

Savia Lobo
08 Jan 2018
6 min read
The world’s biggest consumer technology show is here! CES 2018 commenced last weekend. This new year, multiple tech firms such as LG, Sony, and Samsung have launched brand new OLED-screen televisions, smart laptops, speakers, and more, with next-gen technologies for their consumers. To know about these in detail, you can visit the link here. In this article, we explore how tech giants such as Nvidia, Intel, and AMD have leveraged AI and ML to launch next-gen products. Let’s take a brief look at each one’s contribution at CES 2018.

Nvidia Highlights at CES 2018

Nvidia unveiled its Xavier SoC (System on a Chip) autonomous machine intelligence processor at CES this year. The Xavier has over 9 billion transistors, with a custom 8-core CPU, a 512-core Volta GPU, an 8K HDR video processor, a deep-learning accelerator, and new computer-vision accelerators. With all these huge figures, Xavier can crunch more sensor and vehicle data for the AI systems that will power self-driving vehicles. Other striking features of this SoC: it can perform 30 trillion operations per second using only 30 watts of power, and it is 15 times more efficient than the previous architecture.

Nvidia also announced three new variants of its DRIVE AI platform, based around Xavier SoCs:

- Drive AR focuses on getting augmented reality into vehicles, which can enhance and transform the driving experience. It offers developers an SDK that will enable them to build experiences leveraging computer vision, graphics, and artificial intelligence to do things like overlay information about road conditions, points of interest, and other real-world locations using interactive in-car displays.
- Drive IX offers an easy way to build and deploy in-car AI assistants. These assistants will be capable of incorporating both interior and exterior sensor data to interact not only with drivers but also with passengers on the road.
- The third DRIVE AI-based platform is a revision of its existing autonomous taxi brain, Pegasus. This new version improves on the previously revealed preproduction edition by compiling two Xavier SoCs and two Nvidia GPUs into a package that’s roughly the size of a license plate – down from the trunk-filling physical footprint of the original.

Nvidia also announced that it is partnering with two Chinese companies, Baidu and automaker ZF, to bring autonomous driving to roads. Nvidia’s CEO Jensen Huang stated that Nvidia’s Drive Xavier auto compute platform will be used for Baidu’s Apollo Project. The Apollo project offers an open platform for self-driving cars in partnership with a wide variety of automakers, suppliers, and tech companies. Huang also revealed that Nvidia will be supplying its self-driving computer hardware to Aurora, a start-up founded by the former head of Google’s self-driving car project. Aurora will build self-driving systems for both Volkswagen and Hyundai, the startup revealed last week. Also, Uber has chosen Nvidia as one of its key technology partners for its fleet of self-driving vehicles, specifically to provide the AI computing aspects of its autonomous software. Uber has used Nvidia’s GPUs in both its self-driving ride-hailing test fleet and in its self-driving transport trucks, which are also developed by its Advanced Technologies Group.

Intel Highlights at CES 2018

Intel, in collaboration with AMD, has unveiled new processors with AMD’s Radeon RX Vega M graphics. These new Core processors are Intel’s first CPUs with discrete graphics included in a single package. This enables incredibly thin and lightweight laptops and desktops that can deliver impressive gaming performance with added 4K media streaming. As per Intel, these chips are the first example of power-sharing across CPU and GPU, the first consumer mobile chips to use HBM2 (the second-generation high bandwidth memory, a faster type of graphics memory), and also the first consumer solution to use Intel EMIB (Embedded Multi-die Interconnect Bridge). To know more about this in detail, please visit the link given here.

At CES 2018, Intel also unveiled its new mini-PC NUC system, formerly codenamed Hades Canyon. This system aims at premium virtual reality (VR) applications and comes in two versions, the NUC8i7HVK and the NUC8i7HNK.

- The NUC8i7HVK comes with Radeon RX Vega M GH graphics that can operate from 1,063MHz to 1,190MHz. It has an 8th-gen quad-core 100W Intel Core i7-8809G at 3.1GHz with a 4.2GHz turbo mode, and is "unlocked and VR-capable".
- The NUC8i7HNK comes with Radeon RX Vega M GL graphics with an operating range of 931MHz-1,011MHz. It has a 65W quad-core 8th-gen Intel Core i7-8705G at 3.1GHz with a 4.1GHz turbo mode.

To know more about this news in detail, visit the link here.

AMD Highlights at CES 2018

AMD announced its brand new Ryzen 3 2300U APU, specifically designed for affordable laptops and Chromebooks. The Ryzen 3 2300U is a full-featured chip with 4 cores and 4 threads clocked at a 2.0GHz base and a 3.4GHz boost, and its APU comes with full-on Radeon RX Vega graphics powered by six compute units. In addition, the dual-core Ryzen 3 2200U runs with 4 threads at a standard 2.5GHz frequency that boosts up to 3.4GHz. It also features Radeon RX Vega graphics similar to other APUs in the family, but requires only three compute units to power it.

AMD also announced a new set of Ryzen chips for desktops, i.e., desktop Ryzen APUs, to replace its ongoing Athlon chips. AMD’s new APUs are based on the Raven Ridge architecture and combine an updated version of the Ryzen processor with “discrete-class” Radeon RX Vega graphics. AMD has introduced two chips:

- The Ryzen 5 2400G APU includes 4 cores and 8 threads clocked at a 3.6GHz base and boosted to 3.9GHz. On top of the processor, this new chip features Radeon RX graphics with 11 compute units for playable gaming experiences at 1080p and high-quality settings.
- The Ryzen 3 2200G is rated for 3.5GHz base and 3.7GHz boost clock speeds. This entry-level APU also comes outfitted with 4 cores, but only 4 threads, as well as just 8 compute units attached to its Radeon RX Vega GPU.

AMD also spoke about its new Ryzen 2, which will hit the market around April this year. It will have a new 12nm Zen architecture, which out-smalls the 14nm transistors of Intel Coffee Lake, and will bring higher clock speeds and Precision Boost 2 technology for greater performance and efficiency. To know more about this news in detail, click on the link here.

Apart from well-known names such as Nvidia, Intel, and AMD, Ceva, the leading licensor of signal processing platforms and AI processors, unveiled NeuPro. NeuPro is a powerful and specialized artificial intelligence (AI) processor family for deep learning inference at the edge. It is designed for edge device vendors who can quickly take advantage of the significant possibilities that deep neural network technologies offer. NeuPro extends the use of AI beyond machine vision to new edge-based applications including natural language processing, real-time translation, authentication, workflow management, and many other learning-based applications.

With 4 more days to go, many such advancements are expected to be announced at CES 2018. Watch this space in the coming days for more.

Baidu releases a new AI translation system, STACL, that can do simultaneous interpretation

Sugandha Lahoti
24 Oct 2018
3 min read
Baidu has released a new AI-powered tool called STACL that performs simultaneous interpretation. A simultaneous interpreter performs translation concurrently with the speaker’s speech, with a delay of only a few seconds. Baidu has taken a step further by predicting and anticipating the words a speaker is about to say a few seconds in the future.

Current translation systems are generally prone to latency, such as a “3-word delay”, and the systems are overcomplicated and slow to train. Baidu’s STACL overcomes these limitations by predicting the verb to come, based on all the sentences it has seen in the past. The system uses a simple “wait-k” model trained to generate the target sentence concurrently with the source sentence, but always k words behind, for any given k. STACL directly predicts target words and seamlessly integrates anticipation and translation in a single model. STACL is also flexible in terms of the latency-quality trade-off: the user can specify any arbitrary latency requirement (e.g., a one-word delay or a five-word delay).

Presently, STACL works on text-to-text translation and speech-to-text translation. The model is trained on newswire articles where the same story appeared in multiple languages. In the paper, the researchers demonstrated its capabilities in translating from Chinese to English.

Source: Baidu

The researchers have also come up with a new metric of latency called “Averaged Lagging”, which addresses deficiencies in previous metrics. The system is, of course, far from perfect. For instance, at present, it can’t correct its mistakes or apologize for them. However, it is adjustable in the sense that users will be able to make trade-offs between speed and accuracy. It can also be made more accurate by training it on a particular subject, so that it understands the likely sentences that will appear in presentations related to that subject.

The researchers are also planning to include speech-to-speech translation capabilities in STACL. To do this, they will need to integrate speech synthesis into the system while trying to make it sound natural. According to Liang Huang, principal scientist at Baidu’s Silicon Valley AI Lab, STACL will be demoed at the Baidu World conference on November 1st, where it will provide live simultaneous translation of the speeches. Baidu has previously shown off a prototype consumer device that does sentence-by-sentence translation, and Huang says his team plans to integrate STACL into that gadget.

Go through the research paper and video demos for extensive coverage.

Baidu announces ClariNet, a neural network for text-to-speech synthesis
Baidu Security Lab’s MesaLink, a cryptographic memory safe library alternative to OpenSSL
Baidu releases EZDL – a platform that lets you build AI and machine learning models without any coding knowledge
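To make the “wait-k” idea above concrete, here is a minimal illustrative sketch of the decoding schedule: the translator stays exactly k words behind the incoming speech and flushes the rest when the speaker finishes. This is not Baidu's STACL model; the prediction function is a stand-in placeholder.

```python
# Illustrative sketch of the "wait-k" decoding schedule described above.
# This is not Baidu's STACL model: `predict_next_target_word` is a stand-in for a real
# prefix-to-prefix translation model and simply echoes the source here.
from typing import Iterable, List

def predict_next_target_word(source_prefix: List[str], target_prefix: List[str]) -> str:
    # Placeholder "model": in STACL this would be a neural model trained to emit the
    # next target word from a source *prefix*, anticipating words not yet spoken.
    return source_prefix[len(target_prefix)]

def wait_k_translate(source_stream: Iterable[str], k: int = 3) -> List[str]:
    source_prefix: List[str] = []
    target: List[str] = []
    for word in source_stream:          # words arrive one at a time, as in live speech
        source_prefix.append(word)
        if len(source_prefix) >= k:     # stay exactly k words behind the speaker
            target.append(predict_next_target_word(source_prefix, target))
    # flush the remaining target words once the speaker finishes
    while len(target) < len(source_prefix):
        target.append(predict_next_target_word(source_prefix, target))
    return target

if __name__ == "__main__":
    print(wait_k_translate("the president met with reporters this morning".split(), k=3))
```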

U.S. Senator introduces a bill that levies jail time and hefty fines for companies violating data breaches

Savia Lobo
11 Feb 2019
3 min read
Online privacy abuse is increasingly being kept in check by legislation aimed at keeping user data safe. Last week, Democratic Senator Ron Wyden introduced a new bill that would give the Federal Trade Commission the authority to establish privacy and cybersecurity standards. Additionally, the bill levies jail time and billion-dollar fines on the biggest tech companies if they steal and sell user data, or allow a massive data breach to occur at their company.

Read Also: A brief list of draft bills in US legislation for protecting consumer data privacy

In an interview with The Oregonian/OregonLive, Wyden said, “The point is the Federal Trade Commission on privacy issues thus far has basically been toothless. I am trying to recreate this agency for the digital era.”

Provisions provided by the bill

A ‘Do Not Track’ option
The bill would establish a ‘do not track’ option for people using online services. In lieu of allowing their search history, social media favorites, and online activity to be sold to advertisers, people could opt to pay an unspecified fee to preserve their privacy.

An annual report to be submitted by big companies
The bill would allow the FTC to establish privacy and cybersecurity standards and require big companies to report annually on their privacy practices.

Penalty if false information is submitted
The bill would penalize large companies that submit false information in their annual privacy report. Penalties could amount to 4 percent of annual revenue – a number that could run into the billions of dollars for the biggest social media companies. Executives could face jail time of up to 20 years.

Assessment of algorithms
The bill states that big companies would be required to assess their algorithms for accuracy, fairness, bias, and discrimination.

According to The Oregonian/OregonLive, Wyden “introduced the bill last fall and it has made little headway in the intervening months. But he’s hoping persistent consumer outrage about privacy violations could give it additional traction, coupled with support from within the tech industry itself.”

“What we are essentially advocating is what the big financial services firms have to do under Sarbanes-Oxley,” Wyden said.

David Hoffman, Intel’s associate general counsel and global privacy officer, said, “The bill is a tremendous step towards effective comprehensive U.S. privacy legislation. Providing more authority and resources to the US Federal Trade Commission is a critical foundation for robust privacy protection.”

Ring of Fire’s Farron Cousins explains why this bill is necessary in their YouTube video.
https://www.youtube.com/watch?v=WhB7_4sxff8

Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes
The Collections #2-5 leak of 2.2 billion email addresses might have your information, German news site, Heise reports
Australia’s Assistance and Access (A&A) bill, popularly known as the anti-encryption law, opposed by many including the tech community

Google’s secret Operating System ‘Fuchsia’ will run Android Applications: 9to5Google Report

Melisha Dsouza
04 Jan 2019
3 min read
Google’s secret operating system in the works, a potential Android replacement, will use the Android runtime to run Android apps. On 2nd January, evidence for this was spotted by 9to5Google, which found a new change in the Android Open Source Project that will use a special version of ART to run Android applications. This feature would enable devices running Fuchsia – i.e. smart devices including mobile phones, tablets, computers, wearables, and other gadgets – to take advantage of Android apps in the Google Play Store.

Last month, the same site had reported two new Fuchsia-related repositories that were added to the Android Open Source Project (AOSP) manifest: “platform/prebuilts/fuchsia_sdk” and “device/google/fuchsia”. In a new change posted to Android’s Gerrit source code management, Google has included a README file that indicates what the latter repository is intended for:

Source: 9to5Google

The above snippet from the README file means that Fuchsia will use a specially designed version of the Android Runtime to run Android applications, installable on any Fuchsia device as a .far file. Google has not listed the exact details of how Fuchsia will use the Android Runtime.

What we know about Project Fuchsia so far

According to a Bloomberg report, Google engineers have been working on this project for the past two years in the hope that Fuchsia will replace the now dominant Android operating system. Google started posting the code for this project two years ago and has been working on it ever since. Fuchsia is being designed to overcome the limitations of Android, with better voice interactions and frequent security updates for devices. In the software code posted online, the engineers built encrypted user keys into the system to ensure information is protected every time the software is updated.

Bloomberg, citing people familiar with the project, states that the main aim of designing Fuchsia is ‘creating a single operating system capable of running all the company’s in-house gadgets’. These include devices like Pixel phones and smart speakers, as well as third-party devices relying on Android and other systems like Chrome OS. Some engineers also told Bloomberg that they want to embed Fuchsia in connected home devices, like voice-controlled speakers, and then move on to larger machines such as laptops, ultimately aspiring to swap their system in for Android.

You can head over to ZDNet for more insights into this news. Alternatively, check out 9to5Google for more information on this announcement.

Hacker duo hijacks thousands of Chromecasts and Google smart TVs to play PewDiePie ad, reveals bug in Google’s Chromecast devices!
‘Istio’ available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management
Project Fi is now Google Fi, will support multiple Android based phones, offer beta service for iPhone

Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system

Natasha Mathur
15 Oct 2018
3 min read
Google announced last week that it is open-sourcing Active Question Answering (ActiveQA), a research project that involves training artificial agents for question answering using reinforcement learning. As the research project is now open source, Google has released a TensorFlow package for the ActiveQA system, which comprises three main components along with the code necessary to train and run the ActiveQA agent:

- The first component is a pre-trained sequence-to-sequence model that takes a question as input and returns its reformulations.
- The second component is an answer selection model that uses a convolutional neural network to give a score to each triplet of the original question, reformulation, and answer. The selector uses pre-trained, publicly available word embeddings (GloVe).
- The third component is a question answering system (the environment) that uses BiDAF, a popular question answering system.

“ActiveQA… learns to ask questions that lead to good answers. However, because training data in the form of question pairs, with an original question and a more successful variant, is not readily available, ActiveQA uses reinforcement learning, an approach to machine learning concerned with training agents so that they take actions that maximize a reward, while interacting with an environment,” reads the Google AI blog. The concept of ActiveQA was first introduced in Google’s ICLR 2018 paper “Ask the Right Questions: Active Question Reformulation with Reinforcement Learning”.

ActiveQA takes a very different approach from traditional QA systems. Traditional QA systems use supervised learning techniques along with labeled data to train a system. Such a system is capable of answering arbitrary input questions, but it doesn’t have the ability to deal with uncertainty the way humans would. For instance, it cannot reformulate questions, issue multiple searches, and evaluate the responses. This leads to poor-quality answers.

ActiveQA, on the other hand, comprises an agent that consults the QA system repeatedly. This agent reformulates the original question many times, which helps it select the best answer. Each reformulated question is evaluated on the basis of how good the corresponding answer to that question is. If the corresponding answer is good, then the learning algorithm adjusts the model’s parameters accordingly, so the question reformulation that led to the right answer is more likely to be generated again. The ActiveQA approach allows the agent to engage in a dynamic interaction with the QA system, which leads to a better quality of returned answers.

As per an example mentioned by Google, consider the question “When was Tesla born?”. The agent reformulates the question in two different ways, “When is Tesla’s birthday” and “Which year was Tesla born”, and retrieves the answers to both questions from the QA system. Using all this information collectively, it returns the answer “July 10, 1856”.

“We envision that this research will help us design systems that provide better and more interpretable answers, and hope it will help others develop systems that can interact with the world using natural language,” mentions Google. For more information, read the official Google AI blog.
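A minimal sketch of the reformulate-ask-select loop described above is shown below. The reformulator, QA environment, and answer scorer are toy placeholders, not the models shipped in Google's TensorFlow package.

```python
# Minimal sketch of the ActiveQA inference loop described above: reformulate the
# question, query a QA system with each variant, score the answers, keep the best.
# All three components are placeholders here, not the models in Google's release.
from typing import Callable, List, Tuple

def active_qa(question: str,
              reformulate: Callable[[str], List[str]],
              qa_environment: Callable[[str], str],
              score_answer: Callable[[str, str, str], float]) -> Tuple[str, str]:
    candidates = [question] + reformulate(question)
    scored = []
    for variant in candidates:
        answer = qa_environment(variant)          # e.g. a BiDAF-style reading system
        scored.append((score_answer(question, variant, answer), variant, answer))
    best_score, best_variant, best_answer = max(scored)
    return best_variant, best_answer

# Toy stand-ins so the sketch runs end to end.
reformulate = lambda q: ["When is Tesla's birthday?", "Which year was Tesla born?"]
qa_environment = lambda q: "July 10, 1856" if ("birthday" in q or "born" in q) else "unknown"
score_answer = lambda orig, variant, ans: 0.0 if ans == "unknown" else 1.0 / (1 + len(variant))

print(active_qa("When was Tesla born?", reformulate, qa_environment, score_answer))
```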
Google, Harvard researchers build a deep learning model to forecast earthquake aftershocks location with over 80% accuracy
Google strides forward in deep learning: open sources Google Lucid to answer how neural networks make decisions
Google moving towards data centers with 24/7 carbon-free energy

Cloudflare suffers 2nd major internet outage in a week. This time due to globally deploying a rogue regex rule.

Savia Lobo
03 Jul 2019
4 min read
For the second time in less than a week, Cloudflare was at the center of a major internet outage, which affected many websites for about an hour yesterday due to a software glitch. Last week, Cloudflare users faced a major outage when Verizon accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA.

Cloudflare’s CTO John Graham-Cumming wrote that yesterday’s outage was due to a massive spike in CPU utilization in the network.

Source: Cloudflare

Many users complained of seeing "502 errors" displayed in their browsers when they tried to visit Cloudflare clients. Downdetector, the website that tracks ongoing outages and service interruptions, also flashed a 502 error message.
https://twitter.com/t_husoy/status/1146058460141772802

Graham-Cumming wrote, “This CPU spike was caused by a bad software deploy that was rolled back. Once rolled back the service returned to normal operation and all domains using Cloudflare returned to normal traffic levels”.

A single misconfigured rule, the actual cause of the outage

The cause of the outage was a single misconfigured rule within the Cloudflare Web Application Firewall (WAF), deployed during a routine rollout of new Cloudflare WAF Managed Rules. Though the company has automated systems to run test suites and a procedure for deploying progressively to prevent incidents, these WAF rules were deployed globally in one go, causing yesterday’s outage.
https://twitter.com/mjos_crypto/status/1146168236393807872

The new rules were meant to improve the blocking of inline JavaScript that is used in attacks. “Unfortunately, one of these rules contained a regular expression that caused CPU to spike to 100% on our machines worldwide. This 100% CPU spike caused the 502 errors that our customers saw. At its worst traffic dropped by 82%”, Graham-Cumming writes.

After finding the actual cause of the issue, Cloudflare issued a ‘global kill’ on the WAF Managed Rulesets, which instantly dropped CPU back to normal and restored traffic at 1409 UTC. The company then ensured that the problem was fixed correctly and re-enabled the WAF Managed Rulesets at 1452 UTC.
https://twitter.com/SwiftOnSecurity/status/1146260831899914247

“Our testing processes were insufficient in this case and we are reviewing and making changes to our testing and deployment process to avoid incidents like this in the future”, the Cloudflare blog states. One user commented that Cloudflare should have been more careful about rolling out the change globally while it was staged for a rollout.
https://twitter.com/copyconstruct/status/1146199044965797888

Cloudflare confirms the outage was ‘a mistake’ and not an attack

Cloudflare also received speculation that this outage was caused by a DDoS from China, Iran, North Korea, etc., which Graham-Cumming tweeted was untrue: “It was not an attack by anyone from anywhere”. Cloudflare’s CEO, Matthew Prince, also confirmed that the outage was not the result of an attack but “a mistake on our part.”
https://twitter.com/jgrahamc/status/1146078278278635520

Many users have applauded the fact that Cloudflare acknowledged it was an organizational / engineering management issue and not an individual’s fault.
https://twitter.com/GossiTheDog/status/1146188220268470277

Prince told Inc., “I'm not an alarmist or a conspiracy theorist, but you don't have to be either to recognize that it is ultimately your responsibility to have a plan. If all it takes for half the internet to go dark for 20 minutes is some poorly deployed software code, imagine what happens the next time it's intentional.”

To know more about this news in detail, read Cloudflare’s official blog.

A new study reveals how shopping websites use ‘dark patterns’ to deceive you into buying things you may not want
OpenID Foundation questions Apple’s Sign In feature, says it has security and privacy risks
Email app Superhuman allows senders to spy on recipients through tracking pixels embedded in emails, warns Mike Davidson
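The failure mode behind the outage, a regular expression whose backtracking drives CPU to 100%, is easy to reproduce in miniature. The snippet below is a generic illustration of catastrophic backtracking, not Cloudflare's actual WAF rule.

```python
# Generic illustration of catastrophic regex backtracking (not Cloudflare's actual rule).
# The nested quantifier in (a+)+$ forces the engine to try exponentially many ways of
# splitting the 'a's once the trailing 'b' makes a match impossible.
import re
import time

PATHOLOGICAL = re.compile(r"(a+)+$")

for n in range(16, 25, 2):
    subject = "a" * n + "b"          # the final 'b' guarantees failure after backtracking
    start = time.perf_counter()
    PATHOLOGICAL.match(subject)
    elapsed = time.perf_counter() - start
    print(f"n={n:2d}  {elapsed:8.4f}s")   # runtime roughly doubles each time n grows by 1
```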

Amazon Rekognition can now 'recognize' faces in a crowd at real-time

Abhishek Jha
22 Nov 2017
4 min read
According to Mary Meeker’s 2016 Internet trends report, we are now sharing a staggering 3.25+ billion digital photos every day. In the era of smartphones, the challenge for organizations is to index and interpret this data. Amazon tried to solve this problem with its deep learning-powered Rekognition service, which it unveiled at last year’s AWS re:Invent conference. By June this year, Amazon Rekognition had become a lot smarter, recognizing celebrities across politics, sports, business, entertainment, and media. Now Rekognition has truly arrived – it can ‘recognize’ text in images and faces in real time!

Amazon has infused three new features into the service: detection and recognition of text in images; real-time face recognition across tens of millions of faces; and detection of up to 100 faces in challenging crowded photos. The new functionalities, Amazon claims, make Rekognition “10% more accurate” for face verification and identification.

Text in Image

Being able to detect text in images is, in fact, one of the most anticipated features added to Rekognition. Customers have been pressing for the ability to recognize text embedded in images, such as street signs and license plates captured by traffic cameras, news and captions on TV screens, or stylized quotes overlaid on phone-captured family pictures. The system can now recognize and extract textual content from images. Interestingly, Amazon Web Services announced that Text in Image is specifically built to work with real-world images rather than document images.

“For example, in image sharing and social media applications, you can now enable visual search based on an index of images that contain the same keywords. In media and entertainment applications, you can catalogue videos based on relevant text on screen, such as ads, news, sport scores, and captions. Additionally, in security and safety applications, you can identify vehicles based on license plate numbers from images taken by street cameras,” AWS said in its official release.

The Text in Image feature supports text in most Latin scripts and numbers embedded in a large variety of layouts, fonts, and styles, and overlaid on background objects at various orientations, such as on banners and posters.

Face Search and Detection

With Amazon Rekognition, customers can now perform real-time face searches against collections of millions of faces. “This represents a 5-10X reduction in search latency, while simultaneously allowing for collections that can store 10-20X more faces than before,” AWS said. The face search feature can truly prove to be a boon in security and safety applications – for timely and accurate crime prevention – where suspects can be identified against a collection of millions of faces in near real time.

On top of all that, Rekognition now allows you to detect, analyze, and index up to 100 different faces in a single photograph (recall that the previous cap was 15). This means customers can now feed Amazon Rekognition a shot of a crowd of people and get back information about the demographics and sentiments of all the faces detected. You take a group photo or an image at crowded public locations such as airports and department stores, and Amazon Rekognition will tell you what emotions the detected faces are displaying. Too good to be true!

In the larger picture, Rekognition adds another strong service to AWS’s repertoire. As more and more image content moves onto the internet, systems like Rekognition can help keep customers glued to the cloud platform, engaging businesses for longer periods of time. This is why Rekognition can further boost Amazon’s cloud business.

To get started with Text in Image, Face Search, and Face Detection, you can download the latest SDK or simply log in to the Amazon Rekognition Console. For any further information, refer to the Amazon Rekognition documentation.
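For readers who want to try the two new features, here is a minimal sketch using the AWS SDK for Python (boto3). The bucket name, object keys, and face collection ID are placeholders, and valid AWS credentials with Rekognition permissions are assumed.

```python
# Minimal sketch: calling the Text in Image and face search features via boto3.
# The bucket name, object keys, and collection ID below are placeholders, and AWS
# credentials with Rekognition permissions must be configured for this to run.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

# Text in Image: extract words/lines detected in a photo stored in S3.
text_resp = rekognition.detect_text(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "street-sign.jpg"}}
)
for detection in text_resp["TextDetections"]:
    print(detection["Type"], detection["DetectedText"], round(detection["Confidence"], 1))

# Face search: match the faces in a new photo against an existing face collection.
match_resp = rekognition.search_faces_by_image(
    CollectionId="my-face-collection",
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "crowd.jpg"}},
    FaceMatchThreshold=90,
    MaxFaces=5,
)
for match in match_resp["FaceMatches"]:
    print(match["Face"]["FaceId"], round(match["Similarity"], 1))
```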

Android 9 pie’s Smart Linkify: How Android’s new machine learning based feature works

Natasha Mathur
13 Aug 2018
4 min read
Last week, Google launched Android 9 Pie, the latest version of its machine learning based Android operating system, succeeding Android Oreo. One of the features in Android 9 Pie, named Smart Linkify, is a new version of the existing Android Linkify API. It adds clickable links upon identifying entities such as dates, flights, and addresses in content or text input, via the TextClassifier API.

Smart Linkify

The Smart Linkify API is trained in TensorFlow and uses a small feedforward neural network. This enables it to figure out whether or not a series of numbers or words is a phone number or address, just like Android Oreo’s Smart Text Selection feature. What’s different with this new feature is that instead of just making it easier to highlight and copy the associated text manually, it adds a relevant actionable link, allowing users to immediately take action with just a click.

How does Smart Linkify work?

Smart Linkify follows three basic steps:

- Locating entities in an input text
- Processing the input text
- Training the network

Let’s have a quick look at each of these steps.

Finding entities in an input text
The underlying process for detecting entities within text is not an easy task. It poses many problems, as people write addresses and phone numbers in different ways. There can also be confusion regarding the type of entity. For instance, “Confirmation number: 857-555-3556” can look like a phone number even though it’s not. To fix this problem, the Android team designed an inference algorithm with two small feedforward neural networks that look at the context surrounding words and perform all kinds of entity chunking beyond just addresses and phone numbers. First, the input text is split into words, and then all the possible combinations of entries, named “candidates”, are analyzed. Each candidate is assigned a validity score. Any overlapping candidates are removed, favoring the ones with the higher score. After this, the second neural network takes over and assigns a type of entity, as either a phone number, an address, or in some cases, a non-entity.

Smart Linkify finding entities in a string of text

Processing the input text
After the entities have been located in the text, it’s time to process it. The neural networks determine whether a given entity candidate in the input text is valid or not. After learning the context surrounding the entity, the network classifies it. With the help of machine learning, the input text is split into several parts and each is fed to the network separately.

Smart Linkify processing the input text

Google uses character n-grams and a binary capitalization feature to “represent the individual words as real vectors suitable as an input of the neural network”. Character n-grams represent the word as a set of all character subsequences of a certain length; Google used lengths 1 to 5. The binary feature indicates whether the word starts with a capital letter. This is important, as capitalization in postal addresses is quite distinct, thereby helping the networks differentiate.

Training the network
Google has a training algorithm in place for datasets. It involves collecting lists of addresses, phone numbers, and named entities (such as product, place, and business names), which are then used to synthesize the data for training the neural networks. “We take the entities as they are and generate random textual contexts around them (from the list of random words on Web). Additionally, we add phrases like ‘Confirmation number:’ or ‘ID:’ to the negative training data for phone numbers, to teach the network to suppress phone number matches in these contexts”, says the Google team.

There are a couple of other techniques that Google used for training the network, such as:

- Quantizing the embedding matrix to 8-bit integers
- Sharing embedding matrices between the selection and classification networks
- Varying the size of the context before/after the entities
- Creating artificial negative examples out of the positive ones for the classification network

Currently, Smart Linkify offers support for 16 languages and plans to support more languages in the future. Google still relies on traditional techniques using standard regular expressions for flight numbers, dates, times, IBANs, etc., but it plans to include ML models for these in the future. For more coverage on Smart Linkify, be sure to check out the official Google AI blog.

All new Android apps on Google Play must target API Level 26 (Android Oreo) or higher
Android P Beta 4 is here, stable Android P expected in the coming weeks!
Is Google planning to replace Android with Project Fuchsia?
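As a rough illustration of the word featurization described above (character n-grams of lengths 1 to 5 plus a binary capitalization flag), here is a minimal Python sketch; the hashing and learned embeddings used by the real Smart Linkify model are omitted.

```python
# Minimal sketch of the word features described above: character n-grams of lengths
# 1-5 plus a binary "starts with a capital letter" flag. The real Smart Linkify model
# hashes these n-grams into learned embeddings; that part is omitted here.
from typing import Dict, List, Union

def char_ngrams(word: str, min_n: int = 1, max_n: int = 5) -> List[str]:
    grams = []
    for n in range(min_n, max_n + 1):
        grams.extend(word[i:i + n] for i in range(len(word) - n + 1))
    return grams

def word_features(word: str) -> Dict[str, Union[List[str], int]]:
    return {
        "ngrams": char_ngrams(word),
        "starts_capitalized": int(word[:1].isupper()),  # distinctive in postal addresses
    }

for token in "Meet me at 221B Baker Street".split():
    feats = word_features(token)
    print(token, feats["starts_capitalized"], feats["ngrams"][:6], "...")
```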

PostGIS 3.0.0 releases with raster support as a separate extension

Fatema Patrawala
24 Oct 2019
3 min read
Last week, the PostGIS development team released PostGIS 3.0.0. This release works with PostgreSQL 9.5-12 and GEOS >= 3.6. Developers using the postgis_sfcgal extension need to compile against SFCGAL 1.3.1 or higher. The major change in PostGIS 3.0.0 is that the raster functionality has been broken out into a separate extension. Take a look at the other breaking changes in this release below.

Breaking changes in PostGIS 3.0.0

- Raster support is now in a separate extension, postgis_raster.
- Extension library files no longer include the minor version. If developers need the old behavior, they can use the new configure switch --with-library-minor-version. This change is intended to smooth future pg_upgrade runs, since lib file names will not change between the 3.0, 3.1, and later 3.* releases.
- ND box operators (overlaps, contains, within, equals) no longer look at dimensions that aren’t present in both operands. Developers will need to REINDEX their ND indexes after upgrade.
- Includes a 32-bit hash fix (requires reindexing hash(geometry) indexes).
- Sorting now uses a Hilbert curve and Postgres abbreviated compare.

New features in PostGIS 3.0.0

- PostGIS used to expose a SQL function named geosnoop(geometry) to test the cost of deserializing and re-serializing from the PostgreSQL backend. In this release, that function is back, renamed postgis_geos_noop(geometry), with an SFCGAL counterpart.
- Added ST_AsMVT support for Feature ID. ST_AsMVT transforms a geometry into the coordinate space of a Mapbox Vector Tile for a set of rows corresponding to a layer. It makes a best effort to keep and even correct validity, and might collapse geometry into a lower dimension in the process.
- Added SP-GiST and GiST support for the ND box operators overlaps, contains, within, and equals. SP-GiST in PostGIS has been designed to support K-dimensional trees and other spatial partitioning indexes.
- Added ST_3DLineInterpolatePoint. ST_Line_Interpolate_Point returns a point interpolated along a line.
- Introduced Wagyu to validate MVT polygons. Wagyu can be chosen at configure time to clip and validate MVT polygons. This library is faster and produces more correct results than the GEOS default, but it might drop small polygons. It requires a C++11 compiler and will use CXXFLAGS (not CFLAGS).
- With PostGIS 3.0, it is now possible to generate GeoJSON features directly, without any intermediate code, using the new ST_AsGeoJSON(record) function. The GeoJSON format is a common transport format between servers and web clients, and even between components of processing chains.
- Added the ST_ConstrainedDelaunayTriangles SFCGAL function. This function returns a constrained Delaunay triangulation around the vertices of the input geometry. It needs the SFCGAL backend, supports 3D, and will not drop the z-index.

The team has made various other enhancements in this release as well. To know more about this news, you can check out the official blog post by the PostGIS team.

PostgreSQL 12 Beta 1 released
Writing PostGIS functions in Python language [Tutorial]
Top 7 libraries for geospatial analysis
Percona announces Percona Distribution for PostgreSQL to support open source databases
After PostgreSQL, DigitalOcean now adds MySQL and Redis to its managed databases’ offering
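Two of the changes above are easy to try from a client application: installing raster support as its own extension, and generating GeoJSON features with ST_AsGeoJSON(record). The sketch below uses psycopg2; the connection settings and the parks table are placeholders.

```python
# Minimal sketch of two PostGIS 3.0 changes mentioned above: installing raster support
# as its own extension, and generating GeoJSON features straight from a row with
# ST_AsGeoJSON(record). Connection settings and the "parks" table are placeholders.
import psycopg2

conn = psycopg2.connect("dbname=gisdb user=postgres password=postgres host=localhost")
conn.autocommit = True

with conn.cursor() as cur:
    # Raster functionality is no longer part of the core postgis extension.
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgis;")
    cur.execute("CREATE EXTENSION IF NOT EXISTS postgis_raster;")

    # ST_AsGeoJSON(record) turns a whole row into a GeoJSON Feature (geometry + properties).
    cur.execute("SELECT ST_AsGeoJSON(p.*) FROM parks AS p LIMIT 5;")
    for (feature,) in cur.fetchall():
        print(feature)

conn.close()
```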

Anthony Levandowski announces Pronto AI and makes a coast-to-coast self-driving trip

Sugandha Lahoti
19 Dec 2018
2 min read
Anthony Levandowski is back in the self-driving space with a new company, Pronto AI. This Tuesday, he announced in a blog post on Medium that he has completed a trip across the country in a self-driving car without any human intervention. He is also developing a $5,000 aftermarket driver assistance system for semi-trucks, which will handle the steering, throttle, and brakes on the highway.
https://twitter.com/meharris/status/1075036576143466497

Previously, Levandowski was at the center of a controversy between Alphabet’s self-driving car company Waymo and Uber. Levandowski had allegedly taken confidential documents with him, over which the companies got into a legal battle. He was briefly barred from the autonomous driving industry during the trial. However, the companies settled the case early this year. After laying low for a while, he is back with Pronto AI and its first ADAS (advanced driver assistance system). “I know what some of you might be thinking: ‘He’s back?’” Levandowski wrote in his Medium post announcing Pronto’s launch. “Yes, I’m back.”

Levandowski told the Guardian that he traveled in a self-driving vehicle from San Francisco to New York without human intervention. He didn't touch the steering wheel or pedals — except for periodic rest stops — for the full 3,099 miles. He posted a video that shows a portion of the drive, though it's hard to fact-check the full journey. The car was a modified Toyota Prius that used only video cameras, computers, and basic digital maps to make the cross-country trip.

In the Medium blog post, he also announced the development of a new camera-based ADAS. Named Copilot by Pronto, it delivers advanced features, built specifically for Class 8 vehicles, with driver comfort and safety top of mind. It will offer lane keeping, cruise control, and collision avoidance for commercial semi-trucks, and will be rolled out in early 2019.

Alphabet’s Waymo to launch the world’s first commercial self-driving cars next month
Apex.AI announced Apex.OS and Apex.Autonomy for building failure-free autonomous vehicles
Uber manager warned the leadership team of the inadequacy of safety procedures in their prototype robo-taxis early March, reports The Information

Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more.

Sugandha Lahoti
17 Oct 2018
5 min read
Microsoft CEO Satya Nadella published his letter to shareholders from the company’s 2018 annual report on LinkedIn yesterday. He talks about Microsoft’s accomplishments in the past year and the results and progress of Microsoft’s modern workplace, business applications, infrastructure, data, AI, and gaming. He also mentions the data and privacy rules adopted by Microsoft, and the company’s commitment to “instill trust in technology across everything they do.”

Microsoft’s results and progress

Data and AI
Azure Cosmos DB has already exceeded $100 million in annualized revenue. The company also saw rapid customer adoption of Azure Databricks for data preparation, advanced analytics, and machine learning scenarios. Their Azure Bot Service has nearly 300,000 developers, and they are on the road to building the world’s first AI supercomputer in Azure. Microsoft also acquired GitHub, recognizing the increasingly vital role developers will play in value creation and growth across every industry.

Business Applications
Microsoft’s investments in Power BI have made them the leader in business analytics in the cloud. Their Open Data Initiative with Adobe and SAP will help customers take control of their data and build new experiences that truly put people at the center. HoloLens and mixed reality will be used for designing for first-line workers, who account for 80 percent of the world’s workforce. New solutions powered by LinkedIn and Microsoft Graphs help companies manage talent, training, and sales and marketing.

Applications and Infrastructure
Azure revenue grew 91 percent year-over-year, and the company is investing aggressively to build Azure as the world’s computer. They added nearly 500 new Azure capabilities in the past year, focused on both existing workloads and new workloads such as IoT and Edge AI. Microsoft expanded their global data center footprint to 54 regions, and introduced Azure IoT, Azure Stack, and Azure Sphere.

Modern Workplace
More than 135 million people use Office 365 commercial every month. Outlook Mobile is also employed on 100 million iOS and Android devices worldwide. Microsoft Teams is being used by more than 300,000 organizations of all sizes, including 87 of the Fortune 100. Windows 10 is active on nearly 700 million devices around the world.

Gaming
The company surpassed $10 billion in revenue this year for gaming. Xbox Live now has 57 million monthly active users, and they are investing in new services like Mixer and Game Pass. They also added five new gaming studios this year, including PlayFab, to build a cloud platform for the gaming industry across mobile, PC, and console.

Microsoft’s impact around the globe

Nadella highlighted that companies such as Coca-Cola, Chevron Corporation, and ZF Group, a car parts manufacturer in Germany, are using Microsoft’s technology to build their own digital capabilities. Walmart is also using Azure and Microsoft 365 to transform the shopping experience for customers. In Kenya, M-KOPA Solar, one of their partners, connected homes across sub-Saharan Africa to solar power using the Microsoft Cloud. Dynamics 365 was used in Arizona to improve outcomes among the state’s 15,000 children in foster care. MedApp is using HoloLens in Poland to help cardiologists visualize a patient's heart as it beats in real time. In Cambodia, underserved children in rural communities are learning to code with Minecraft.

How Microsoft is handling trust and responsibility

Microsoft’s motto is “instilling trust in technology across everything they do.” Nadella says, “We believe that privacy is a fundamental human right, which is why compliance is deeply embedded in all our processes and practices.” Microsoft has extended the data subject rights of GDPR to all their customers around the world, not just those in the European Union, and advocated for the passage of the CLOUD Act in the U.S. They also led the Cybersecurity Tech Accord, which has been signed by 61 global organizations, and are calling on governments to do more to make the internet safe. They announced the Defending Democracy Program to work with governments around the world to help safeguard voting, and introduced AccountGuard to offer advanced cybersecurity protections to political campaigns in the U.S.

The company is also investing in tools for detecting and addressing bias in AI systems and is advocating for government regulation. They are also addressing society's most pressing challenges with new programs like AI for Earth, a five-year, $50M commitment to environmental sustainability, and AI for Accessibility to benefit people with disabilities.

Nadella further adds, “Over the past year, we have made progress in building a diverse and inclusive culture where everyone can do their best work.” Microsoft has nearly doubled the number of women corporate vice presidents at Microsoft since FY16, and has increased African American/Black and Hispanic/Latino representation by 33 percent. He concludes by saying, “I’m proud of our progress, and I’m proud of the more than 100,000 Microsoft employees around the world who are focused on our customers’ success in this new era.”

Read the full letter on LinkedIn.

Paul Allen, Microsoft co-founder, philanthropist, and developer dies of cancer at 65
‘Employees of Microsoft’ ask Microsoft not to bid on US Military’s Project JEDI in an open letter
Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members

Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate

Bhagyashree R
18 Jun 2019
5 min read
Last week, a team of machine learning experts published a paper titled “Tackling Climate Change with Machine Learning”. The paper highlights how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate.
https://twitter.com/hardmaru/status/1139340463486320640

Climate change and its consequences are becoming more apparent to us day by day, and one of the most significant is global warming, which is mainly caused by the emission of greenhouse gases. The report suggests that we can mitigate this problem by making changes to existing electricity systems, transportation, buildings, industry, and land use. To adapt to the changing climate, we need climate modeling, risk prediction, and planning for resilience and disaster management. The 54-page report lists various steps involving machine learning that can help us mitigate and adapt to the problem of greenhouse gas emissions. In this article, we look at how machine learning and deep learning can be used to reduce greenhouse gas emissions from electricity systems.

Electricity systems

A quarter of human-caused greenhouse gas emissions come from electricity systems. To minimize this, we need to switch to low-carbon electricity sources and also take steps to reduce emissions from existing carbon-emitting power plants. There are two types of low-carbon electricity sources: variable and controllable.

Variable sources

Variable sources are those that fluctuate based on external factors; for instance, the energy produced by solar panels depends on the sunlight.

Power generation and demand forecasting
Though ML and deep learning methods have been applied to power generation and demand forecasting previously, this was done using domain-agnostic techniques, for instance, using clustering techniques on households, or using game theory, optimization, regression, or online learning to predict disaggregated quantities from aggregate electricity signals. This study suggests that future ML algorithms should incorporate domain-specific insights. They should be created using the innovations in climate modeling and weather forecasting and in hybrid-plus-ML modeling techniques. These techniques will help in improving both short- and medium-term forecasts, and ML models can be used to directly optimize for system goals. (A toy sketch of this kind of forecasting appears at the end of this article.)

Improving scheduling and flexible demand
ML can play an important role in improving the existing centralized process of scheduling and dispatching by speeding up power system optimization problems. It can be used to fit fast function approximators to existing optimization problems or provide good starting points for optimization. Dynamic scheduling and safe reinforcement learning can also be used to balance the electric grid in real time to accommodate variable generation or demand. ML or other simpler techniques can enable flexible demand by making storage and smart devices automatically respond to electricity prices. To provide appropriate signals for flexible demand, system operators can design electricity prices based on, for example, forecasts of variable electricity or grid emissions.

Accelerated science for materials
Many scientists are working to introduce new materials that are capable of storing or harnessing energy from variable natural resources more efficiently. For instance, solar fuels are synthetic fuels produced from sunlight or solar heat; they can capture solar energy when the sun is up and then store this energy for later use. However, coming up with new materials can prove to be very slow and imprecise. There are times when human experts do not understand the physics behind these materials and have to manually apply heuristics to understand a proposed material’s physical properties. ML techniques can prove to be helpful in such cases. They can be used to automate this process by combining “heuristics with experimental data, physics, and reasoning to apply and even extend existing physical knowledge.”

Controllable sources

Controllable sources are those that can be turned on and off, for instance, nuclear or geothermal plants.

Nuclear power plants
Nuclear power plants are very important for meeting climate change goals. However, they do pose some significant challenges, including public safety, waste disposal, slow technological learning, and high costs. ML, specifically deep networks, can be used to reduce maintenance costs. They can speed up inspections by detecting cracks and anomalies from image and video data, or by preemptively detecting faults from high-dimensional sensor and simulation data.

Nuclear fusion reactors
Nuclear fusion reactors are capable of producing safe and carbon-free electricity with the help of a virtually limitless hydrogen fuel supply. But right now, they consume more energy than they produce. A lot of scientific and engineering research still needs to be done before nuclear fusion reactors can be put to practical use. ML can be used to accelerate this research by guiding experimental design and monitoring physical processes. As nuclear fusion reactors have a large number of tunable parameters, ML can help prioritize which parameter configurations should be explored during physical experiments.

Reducing the current electricity system’s climate impacts

Reducing life-cycle fossil fuel emissions
While we work towards bringing low-carbon electricity systems to society, it is important to reduce emissions from current fossil fuel power generation. ML can be used to prevent the leakage of methane from natural gas pipelines and compressor stations. Earlier work has used sensor and satellite data to proactively suggest pipeline maintenance or detect existing leaks; ML can be used to improve and scale these existing solutions.

Reducing system waste
As electricity is supplied to consumers, some of it gets lost as resistive heat on electricity lines. While we cannot eliminate these losses completely, they can be significantly mitigated to reduce waste and emissions. ML can help prevent avoidable losses through predictive maintenance, by suggesting proactive electricity grid upgrades.

To know more in detail about how machine learning can help reduce the impact of climate change, check out the report.

Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
Now there’s a CycleGAN to visualize the effects of climate change. But is this enough to mobilize action?
ICLR 2019 Highlights: Algorithmic fairness, AI for social good, climate change, protein structures, GAN magic, adversarial ML and much more
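As a toy illustration of the demand-forecasting application discussed above, here is a short sketch that predicts hourly load from lagged-load features on synthetic data. The data and model choice are ours for illustration only, not a method prescribed by the paper.

```python
# Toy sketch of short-term electricity demand forecasting with lagged-load features,
# illustrating the kind of ML application discussed above. The data is synthetic and
# the model choice (gradient boosting) is ours, not taken from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 365)
# Synthetic hourly load: daily and weekly cycles plus noise.
load = (100 + 20 * np.sin(2 * np.pi * hours / 24)
            + 10 * np.sin(2 * np.pi * hours / (24 * 7))
            + rng.normal(0, 3, hours.size))

LAGS = [1, 2, 24, 168]            # previous hour, 2 hours ago, same hour yesterday / last week
max_lag = max(LAGS)
X = np.column_stack([load[max_lag - lag:-lag] for lag in LAGS])
y = load[max_lag:]

split = int(0.8 * len(y))
model = GradientBoostingRegressor().fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"Mean absolute error on held-out hours: {mae:.2f} (load units)")
```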
Google finally ends Forced arbitration for all its employees

Natasha Mathur
22 Feb 2019
4 min read
Google announced yesterday that it is ending forced arbitration for its full-time employees as well as for its Temps, Vendors, and Contractors (TVCs) in cases of harassment, discrimination, or wrongful termination. The changes will go into effect starting March 21, and employees will be able to litigate their past claims. Moreover, Google has also lifted the ban on class action lawsuits for employees, reports WIRED.

https://twitter.com/GoogleWalkout/status/1098692468432867328

In the case of contractors, Google has removed forced arbitration from the contracts of those who work directly with the firm. However, outside firms employing contractors are not required to follow suit. Google will notify other firms and ask them to consider the approach and see if it works for them.

Although this is very good news, the group called ‘Googlers for ending forced arbitration’ published a post on Medium stating that the “fight is not over”. They have planned a meeting with legislators in Washington D.C. next week, where six members of the group will advocate for an end to forced arbitration for all workers. “We will stand with Senators and House Representatives to introduce multiple bills that end the practice of forced arbitration across all employers. We’re calling on Congress to make this a law to protect everyone”, states the group.

https://twitter.com/endforcedarb/status/1098697243517960194

It was back in November that 20,000 Google employees, along with Temps, Vendors, and Contractors, walked out to protest the discrimination, racism, and sexual harassment encountered at Google’s workplace. Google had waived forced arbitration for sexual harassment and assault claims in response to the Google walkout (a move that was soon followed by Facebook), but employees were not convinced. The forced arbitration policy was still applicable to contractors, temps, and vendors, and was still in effect for other forms of discrimination within the firm. This was soon followed by Google contractors writing an open letter on Medium to Sundar Pichai, CEO, Google, in December, demanding that he address their calls for better conditions and equal benefits for contractors. Googlers also launched an industry-wide awareness campaign against forced arbitration last month, sharing information about arbitration on their Twitter and Instagram accounts throughout the day. The employees mentioned in a post on Medium that there were “no meaningful gains for worker equity … nor any actual change in employee contracts or future offer letters”.

The pressure on Google for more transparency around its sexual assault policies had been building up for quite a while. For instance, shareholders, including James Martin and two pension funds, sued Alphabet’s board members last month for protecting top execs accused of sexual harassment; the lawsuit urged for more clarity surrounding Google’s policies. Similarly, Liz Fong-Jones, developer advocate at Google Cloud Platform, revealed earlier last month that she was leaving Google due to its lack of leadership in response to the demands made by employees during the Google walkout. Jones also published a post on Medium last week, where she talked about the ‘grave concerns’ she had about the strategic decisions made at Google.

“We commend the company in taking this step so that all its workers can access their civil rights through public court. We will officially celebrate when we see these changes reflected in our policy websites and/or employment agreements”, states the end forced arbitration group.

Public reaction to the news is largely positive, with people cheering on Google employees for the victory:

https://twitter.com/VidaVakil/status/1098773099531493376
https://twitter.com/jas_mint/status/1098723571948347392
https://twitter.com/teamcoworker/status/1098697515858182144
https://twitter.com/PipelineParity/status/1098721912111464450

Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus

Is JupyterLab all set to phase out Jupyter Notebooks?

Savia Lobo
28 Feb 2018
3 min read
To keep up with Project Jupyter’s motto of developing open-source software, open standards, and services with the goal of offering interactive computing across various programming languages, the project released the JupyterLab beta, readily available for users, this month. JupyterLab is tagged as the next-generation UI for Project Jupyter, and is a successor to Jupyter Notebooks, a successful and widely adopted application launched by Project Jupyter last year.

Saying hello to JupyterLab

Jupyter Notebook is an open-source web application that allows users to create and share documents that contain live code, visualizations, narrative text, and equations. Jupyter notebooks are used for tasks such as data cleaning, data transformation, numerical simulation, machine learning, and many more. It is now well established that the data science community loves using Jupyter Notebooks for interactive computing. However, there are certain barriers that made their interaction with Jupyter Notebook a little less than ideal. Some of the cons include:

- Transitioning between different building blocks within a workflow is difficult
- Real-time collaboration on notebooks via Dropbox or Google Drive is not possible with Jupyter Notebooks
- Too much wasted space on the right and left of the Jupyter notebook

These are some of the issues with Jupyter Notebooks which are taken care of in the brand new JupyterLab.

A swift move to JupyterLab

JupyterLab has complete support for Jupyter Notebooks, so one won’t miss working with notebooks but can do a lot more using JupyterLab. JupyterLab is an interactive environment which allows you to work with notebooks, code, and data, all under one roof. The most important feature of JupyterLab is real-time collaboration with several people on a single project. An add-on to this is its user-friendly interface, which makes it all the more easy to use.

JupyterLab also shows a high level of integration between notebooks. This means you can drag and drop notebook cells and can also copy them between notebooks. You can also run code blocks from text files with .py, .R, and .tex extensions. JupyterLab can also multi-task, i.e. you can open up notebooks, text editors, terminals, and other components, and view and edit them in different tabs simultaneously.

JupyterLab offers an entire range of extensions which can be used to enhance parts of JupyterLab. One can choose from a variety of themes, editors, and renderers for rich outputs on notebooks. JupyterLab extensions are npm packages (the standard package format in JavaScript development). There are also many community-developed extensions being built on GitHub. To find extensions, you can search GitHub for jupyterlab-extension. You can also check out the developer documentation guide for information on developing extensions.

Some additional features of JupyterLab include:

- JupyterLab is more about development, unlike Jupyter Notebook, which focuses on presentation.
- Developers can perform syntax completion using the Tab key and object tool-tip inspection using the Shift-Tab keys.
- Files can be opened up in a variety of formats. Also, developers can run their code interactively inside ‘consoles’ and not only notebooks. This promotes an imperative programming mode.
- JupyterLab accommodates notebooks in multiple languages, provided the kernels for those languages are installed.
- Browsers such as Chrome, Firefox, and Safari are compatible with JupyterLab.
The Jupyter community plans to release version 1.0 of JupyterLab some time later this year. Version 1.0 will replace the classic Jupyter Notebook. However, the notebook document format will be supported by both the classic Notebook and JupyterLab. For further details on the JupyterLab beta, visit Project Jupyter’s official blog post.
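As a small illustration of that last point, here is a minimal sketch showing that a notebook is an ordinary .ipynb JSON document that the nbformat library can create and read back; because both frontends operate on the same document format, nothing about the file needs to change when moving between the classic Notebook and JupyterLab. The file name used below is purely an example.

```python
# Minimal sketch: the .ipynb document format is shared by the classic Notebook
# and JupyterLab, so a file written once opens unchanged in either frontend.
import nbformat
from nbformat.v4 import new_code_cell, new_notebook

# Build a tiny notebook in memory and write it to disk as ordinary JSON.
nb = new_notebook(cells=[new_code_cell("print('hello from either frontend')")])
nbformat.write(nb, "demo.ipynb")   # "demo.ipynb" is just an example file name

# Reading it back uses the same format version either frontend would load.
nb_again = nbformat.read("demo.ipynb", as_version=4)
print(len(nb_again.cells), "cell:", nb_again.cells[0].source)
```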