
Tech News - Data

1209 Articles

Introducing Jupytext: Jupyter notebooks as Markdown documents, Julia, Python or R scripts

Natasha Mathur
11 Sep 2018
2 min read
Project Jupyter released Jupytext last week, a new project that lets you convert Jupyter notebooks to and from Julia, Python or R scripts (extensions .jl, .py and .R), Markdown documents (extension .md), or R Markdown documents (extension .Rmd). It comes with features such as writing notebooks as plain text, paired notebooks, command line conversion, and round-trip conversion. It is available from within Jupyter, so you can work on your notebook as you usually do and save and read it in the formats you select. Let's have a look at its major features.

Writing notebooks as plain text
Jupytext lets you draft and test plain scripts in your favorite IDE and open them naturally as notebooks in Jupyter. You can run the notebook in Jupyter to generate output, associate a .ipynb representation, and save and share your research.

Paired notebooks
Paired notebooks let you store a .ipynb file alongside the text-only version. They can be enabled by adding a jupytext_formats entry to the notebook metadata via Edit/Edit Notebook Metadata in Jupyter's menu. On saving the notebook, both the Jupyter notebook and the Python script are updated.

Command line conversion
A jupytext script is available for command line conversion between the various notebook formats:

jupytext notebook.ipynb --to md --test          (test round-trip conversion)
jupytext notebook.ipynb --to md --output        (display the Markdown version on screen)
jupytext notebook.ipynb --to markdown           (create a notebook.md file)
jupytext notebook.ipynb --to python             (create a notebook.py file)
jupytext notebook.md --to notebook              (overwrite notebook.ipynb; outputs are removed)

Round-trip conversion
Round-trip conversion is also possible with Jupytext. Converting a script to a Jupyter notebook and back to a script is an identity operation; if you associate a Jupyter kernel with your notebook, that information goes to a YAML header at the top of your script. Converting Markdown to a Jupyter notebook and back to Markdown is likewise an identity operation. Converting Jupyter to script and back to Jupyter preserves source and metadata. Similarly, converting Jupyter to Markdown and back to Jupyter preserves source and metadata (cell metadata is available only for R Markdown).

For more information, check out the official release notes.

10 reasons why data scientists love Jupyter notebooks
Is JupyterLab all set to phase out Jupyter Notebooks?
How everyone at Netflix uses Jupyter notebooks from data scientists, machine learning engineers, to data analysts
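To make the round-trip workflow described above concrete, here is a minimal sketch driven from Python rather than the command line. It assumes Jupytext exposes read/write helpers (jupytext.read and jupytext.write); the file names are illustrative, not from the announcement.

```python
# Minimal sketch (assumed API: jupytext.read / jupytext.write; paths illustrative).
import jupytext

nb = jupytext.read("notebook.py")       # open a plain Python script as a notebook object
jupytext.write(nb, "notebook.ipynb")    # write the paired .ipynb representation
jupytext.write(nb, "notebook.md")       # or a Markdown version of the same notebook
```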

Introducing Voila that turns your Jupyter notebooks to standalone web applications

Bhagyashree R
13 Jun 2019
3 min read
Last week, a Jupyter Community Workshop on dashboarding was held in Paris. At the workshop, several contributors came together to build the Voila package, the details of which QuantStack shared yesterday. Voila serves live Jupyter notebooks as standalone web applications, providing a neat way to share your results with colleagues.

Why do we need Voila?
Jupyter notebooks allow you to do "literate programming", in which human-friendly explanations accompany code blocks. They let scientists, researchers, and other practitioners of scientific computing add the theory behind their code, including mathematical equations. However, Jupyter notebooks can be problematic when you want to communicate your results to non-technical stakeholders. They might be put off by the code blocks and by the need to run the notebook to see the results. Notebooks also have no mechanism to prevent arbitrary code execution by the end user.

How does Voila work?
Voila addresses these concerns by converting your Jupyter notebook into a standalone web application. After connecting to a notebook URL, Voila launches the kernel for that notebook and runs all the cells. Once execution is complete, it does not shut down the kernel. The notebook is converted to HTML and served to the user. The rendered HTML includes JavaScript responsible for initiating a websocket connection with the Jupyter kernel. (A diagram depicting how this works is available on the Jupyter Blog.)

Voila provides the following features:
- Renders Jupyter interactive widgets: it supports Jupyter widget libraries including bqplot, ipyleaflet, ipyvolume, ipympl, ipysheet, plotly, and ipywebrtc.
- Prevents arbitrary code execution: it does not allow arbitrary code execution by consumers of dashboards.
- A language-agnostic dashboarding system: Voila is built on Jupyter standard protocols and file formats, enabling it to work with any Jupyter kernel (C++, Python, Julia).
- A custom template system for better extensibility: it provides a flexible template system to produce rich application layouts.

Many Twitter users applauded this new way of creating live and interactive dashboards from Jupyter notebooks:
https://twitter.com/philsheard/status/1138745404772818944
https://twitter.com/andfanilo/status/1138835776828071936
https://twitter.com/ToluwaniJohnson/status/1138866411261124608

Some users also compared it with another dashboarding solution called Panel. The main difference between Panel and Voila is that Panel supports Bokeh widgets, whereas Voila is framework and language agnostic. "Panel can use a Bokeh server but does not require it; it is equally happy communicating over Bokeh Server's or Jupyter's communication channels. Panel doesn't currently support using ipywidgets, nor does Voila currently support Bokeh plots or widgets, but the maintainers of both Panel and Voila have recently worked out mechanisms for using Panel or Bokeh objects in ipywidgets or using ipywidgets in Panels, which should be ready soon," a Hacker News user commented.

To read more in detail about Voila, check out the official announcement on the Jupyter Blog.

JupyterHub 1.0 releases with named servers, support for TLS encryption and more
Introducing Jupytext: Jupyter notebooks as Markdown documents, Julia, Python or R scripts
JupyterLab v0.32.0 releases
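As an illustration of the widget rendering described above, a notebook cell like the one below is served as a live control, with the code hidden, when the notebook is launched with the voila command. The notebook file name and slider parameters are hypothetical.

```python
# Contents of a notebook cell; serving the notebook with `voila notebook.ipynb`
# renders the widget without exposing the code (file name is hypothetical).
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(description="samples", min=10, max=1000, step=10)
display(slider)
```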

PyTorch 1.2 is here with a new TorchScript API, expanded ONNX export, and more

Bhagyashree R
12 Aug 2019
3 min read
Last week, the PyTorch team announced the release of PyTorch 1.2. This version comes with a new TorchScript API with improved Python language coverage, expanded ONNX export, a standard nn.Transformer module, and more.
https://twitter.com/PyTorch/status/1159552940257923072

Here are some of the updates in PyTorch 1.2:

A new TorchScript API
TorchScript enables you to create models that are serializable and optimizable from PyTorch code. PyTorch 1.2 brings a new "easier-to-use TorchScript API" for converting nn.Modules into ScriptModules. torch.jit.script now recursively compiles the functions, methods, and classes that it encounters. The preferred way to create ScriptModules is torch.jit.script(nn_module_instance) instead of inheriting from torch.jit.ScriptModule. With this update, some items are considered deprecated and developers are advised not to use them in new code. Among the deprecated components are the @torch.jit.script_method decorator, classes that inherit from torch.jit.ScriptModule, the torch.jit.Attribute wrapper class, and the __constants__ array. TorchScript also has improved support for Python language constructs and Python's standard library. It supports iterator-based constructs such as for..in loops, zip(), and enumerate(), as well as the math and string libraries and other Python builtin functions.

Full support for ONNX Opset export
The PyTorch team has worked with Microsoft to bring full support for exporting ONNX Opset versions 7, 8, 9, and 10. PyTorch 1.2 adds the ability to export dropout, slice, flip, and interpolate in Opset 10. ScriptModule has been improved to include support for multiple outputs, tensor factories, and tuples as inputs and outputs. Developers can also register their own symbolics to export custom ops, and set the dynamic dimensions of inputs during export.

A standard nn.Transformer
PyTorch 1.2 comes with a standard nn.Transformer module that allows you to modify its attributes as needed. Based on the paper Attention Is All You Need, this module relies entirely on an attention mechanism for drawing global dependencies between input and output. It is designed so that its individual components can be used independently; for instance, you can use nn.TransformerEncoder without the larger nn.Transformer.

Breaking changes in PyTorch 1.2
- The return dtype of comparison operations including lt, le, gt, ge, eq, and ne is now torch.bool instead of torch.uint8.
- The type of torch.tensor(bool) and torch.as_tensor(bool) is now torch.bool instead of torch.uint8.
- Some linear algebra functions have been removed in favor of the renamed operations; the release notes include a table listing the removed operations and their alternatives (source: PyTorch).

Check out the PyTorch release notes to know more in detail.

PyTorch announces the availability of PyTorch Hub for improving machine learning research reproducibility
Sherin Thomas explains how to build a pipeline in PyTorch for deep learning workflows
Facebook open-sources PyText, a PyTorch based NLP modeling framework
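To make the TorchScript and nn.Transformer changes above concrete, here is a minimal sketch: a small module scripted directly with torch.jit.script (no ScriptModule subclassing), plus an nn.Transformer instance. The module and the dimensions are invented for illustration.

```python
# Minimal sketch of the PyTorch 1.2 additions described above (sizes are illustrative).
import torch
import torch.nn as nn

class TinyModel(nn.Module):
    def forward(self, x):
        return torch.relu(x).sum()

scripted = torch.jit.script(TinyModel())          # recursive compilation of the module
print(scripted(torch.randn(4, 8)))

transformer = nn.Transformer(d_model=64, nhead=4,
                             num_encoder_layers=2, num_decoder_layers=2)
src = torch.randn(10, 2, 64)                      # (source length, batch, d_model)
tgt = torch.randn(7, 2, 64)                       # (target length, batch, d_model)
out = transformer(src, tgt)                       # output shape: (7, 2, 64)
```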

Tesseract version 4.0 releases with new LSTM based engine, and an updated build system

Natasha Mathur
30 Oct 2018
2 min read
Google released version 4.0 of its OCR engine, Tesseract, yesterday. Tesseract 4.0 comes with a new neural net (LSTM) based OCR engine, an updated build system, other improvements, and bug fixes.

Tesseract is an OCR engine that supports Unicode (a specification covering virtually all character sets) and can recognize more than 100 languages out of the box. It can be trained to recognize other languages and is used for text detection on mobile devices, in videos, and in Gmail image spam detection. Let's have a look at what's new in Tesseract 4.0.

New neural net (LSTM) based OCR engine
The new OCR engine uses a neural network system based on LSTMs, with major accuracy gains. It ships with new training tools for the LSTM OCR engine: you can train a new model from scratch or fine-tune an existing model. Trained data, including LSTM models for 123 languages, has been added to the new OCR engine. Optional accelerated code paths have been added for the LSTM recognizer. Moreover, a new parameter, lstm_choice_mode, allows alternative symbol choices to be included in the hOCR output.

Updated build system
Tesseract 4.0 uses semantic versioning and requires Leptonica 1.74.0 or higher. If you want to build Tesseract from source, a compiler with strong C++11 support is necessary. Unit tests have been added to the main repo, and Tesseract's source tree has been reorganized in version 4.0. A new option lets you compile Tesseract without the code of the legacy OCR engine.

Bug fixes
Issues in training data rendering have been fixed. Damage caused to binary images when processing PDFs has been fixed. Issues in the OpenCL code have been fixed; OpenCL now works for the legacy Tesseract OCR engine, but its performance hasn't improved yet.

Other improvements
Multi-page TIFF handling is improved in Tesseract 4.0, and improvements have been made to PDF rendering. Version information and improved help texts have been added to the training tools. tessedit_pageseg_mode 1 has been removed from the hocr, pdf, and tsv config files; users now have to pass --psm 1 explicitly if that behaviour is desired.

For more information, check out the official release notes.

Tesla v9 to incorporate neural networks for autopilot
Neural Network Intelligence: Microsoft's open source automated machine learning toolkit
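For example, the new LSTM engine and the page segmentation mode mentioned above can be selected explicitly through the third-party pytesseract wrapper (not part of Tesseract itself); the sketch below assumes pytesseract, Pillow, and a local Tesseract 4.x install, and the image path is illustrative.

```python
# Hedged sketch using the third-party pytesseract wrapper around the Tesseract binary.
# --oem 1 selects the new LSTM engine; --psm 1 requests automatic page segmentation,
# which must now be passed explicitly as noted above. The image path is illustrative.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("scanned_page.png"),
                                   config="--oem 1 --psm 1")
print(text)
```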

DeepMind AI’s AlphaStar achieves Grandmaster level in StarCraft II with 99.8% efficiency

Vincy Davis
04 Nov 2019
5 min read
Earlier this year, in January, Google DeepMind's AI AlphaStar defeated two professional players, TLO and MaNa, at StarCraft II, a real-time strategy game. Two days ago, DeepMind announced that AlphaStar has now achieved the highest possible online competitive ranking, called Grandmaster level, in StarCraft II. This makes AlphaStar the first AI to reach the top league of a widely popular game without any restrictions.

AlphaStar used multi-agent reinforcement learning and was rated above 99.8% of officially ranked human players. It achieved Grandmaster level for all three StarCraft II races: Protoss, Terran, and Zerg. The DeepMind researchers have published the details of AlphaStar in the paper titled 'Grandmaster level in StarCraft II using multi-agent reinforcement learning'.
https://twitter.com/DeepMindAI/status/1189617587916689408

How did AlphaStar achieve the Grandmaster level in StarCraft II?
The DeepMind researchers were able to develop a robust and flexible agent by understanding the potential and limitations of open-ended learning, which helped them make AlphaStar cope with complex real-world domains. "Games like StarCraft are an excellent training ground to advance these approaches, as players must use limited information to make dynamic and difficult decisions that have ramifications on multiple levels and timescales," states the blog post.

StarCraft II requires players to balance high-level economic decisions with individual control of hundreds of units. When playing the game, humans are under physical constraints that limit their reaction time and rate of actions. Accordingly, AlphaStar was subjected to the same constraints, making it suffer from delays due to network latency and computation time. To limit its actions per minute (APM), AlphaStar's peak statistics were kept substantially lower than those of humans. To align with standard human play, it also had a limited view of the map, could register only a limited number of mouse clicks, and had only 22 non-duplicated actions to play every five seconds.

AlphaStar uses a combination of general-purpose techniques: neural network architectures, imitation learning, reinforcement learning, and multi-agent learning. Games were sampled from a publicly available dataset of anonymized human replays, on which the agent was trained to predict the action of every player. These predictions were then used to procure a diverse set of strategies reflecting the different modes of human play.

Read More: DeepMind's Alphastar AI agent will soon anonymously play with European StarCraft II players

Dario "TLO" Wünsch, a professional StarCraft II player, says, "I've found AlphaStar's gameplay incredibly impressive – the system is very skilled at assessing its strategic position, and knows exactly when to engage or disengage with its opponent. And while AlphaStar has excellent and precise control, it doesn't feel superhuman – certainly not on a level that a human couldn't theoretically achieve. Overall, it feels very fair – like it is playing a 'real' game of StarCraft."

According to the paper, AlphaStar had around 10^26 possible actions available at each time step, and it had to make thousands of actions before learning whether it had won or lost the game. One of the key strategies behind AlphaStar's performance was learning human strategies, which was necessary to ensure that the agents kept exploring those strategies throughout self-play. The researchers say, "To do this, we used imitation learning – combined with advanced neural network architectures and techniques used for language modeling – to create an initial policy which played the game better than 84% of active players." AlphaStar also uses a latent variable to encode the distribution of opening moves from human games, which helped it preserve high-level strategies and enabled it to represent many strategies within a single neural network.

By combining advances in imitation learning, reinforcement learning, and the League, the researchers were able to train AlphaStar Final, the agent that reached Grandmaster level at the full game of StarCraft II without any modifications. AlphaStar used a camera interface, which gave it exactly the information that a human player would receive, and all the interface constraints and restrictions it faced were approved by a professional player. Finally, the results indicate that general-purpose learning techniques can scale AI systems to work in complex, dynamic environments involving multiple actors.

AlphaStar's great feat has got many people excited about the future of AI.
https://twitter.com/mickdooit/status/1189604170489315334
https://twitter.com/KaiLashArul/status/1190236180501139461
https://twitter.com/JoshuaSpanier/status/1190265236571459584

Interested readers can read the research paper to check AlphaStar's performance. Head over to DeepMind's blog for more details.

Google AI introduces Snap, a microkernel approach to 'Host Networking'
Are we entering the quantum computing era? Google's Sycamore achieves 'quantum supremacy' while IBM refutes the claim
Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
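The imitation-learning step quoted above is, at its core, supervised prediction of human actions from game states. The sketch below is a heavily simplified behavioural-cloning illustration in PyTorch, not DeepMind's implementation; the state and action dimensions and the training data are invented stand-ins.

```python
# Toy behavioural-cloning sketch (illustrative only, not DeepMind's code).
# Stand-in replay data: 128-dimensional state features, 10 discrete actions.
import torch
import torch.nn as nn

states = torch.randn(1024, 128)
human_actions = torch.randint(0, 10, (1024,))

policy = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(200):
    logits = policy(states)                                  # predict the human's action
    loss = nn.functional.cross_entropy(logits, human_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```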

Alibaba’s chipmaker launches open source RISC-V based ‘XuanTie 910 processor’ for 5G, AI, IoT and self-driving applications

Vincy Davis
26 Jul 2019
4 min read
Launched in 2018, Alibaba's chip subsidiary Pingtouge made a major announcement yesterday: it is launching its first product, the XuanTie 910 processor, built on the open-source RISC-V instruction set architecture. The XuanTie 910 is expected to reduce the costs of related chip production by more than 50%, reports Caixin Global.

XuanTie 910, also known as T-Head, will soon be available in the market for commercial use. Pingtouge will also release some of XuanTie 910's code on GitHub for free to help the global developer community create innovative applications. No release dates have been revealed yet.

What are the properties of the XuanTie 910 processor?
The XuanTie 910 is a 16-core processor that delivers 7.1 CoreMark/MHz, with a main frequency of up to 2.5GHz. It can be used to manufacture high-end edge-based microcontrollers (MCUs), CPUs, and systems-on-chip (SoCs), for applications such as 5G telecommunications, artificial intelligence (AI), and autonomous driving. The XuanTie 910 gives a 40% performance increase over mainstream RISC-V processors, as well as a 20% increase in terms of instructions. According to Synced, the XuanTie 910 has two unconventional properties: it is a 12-stage pipelined, out-of-order, triple-issue processor with two memory accesses per cycle, and its computing, storage, and multi-core capabilities are superior thanks to an extended instruction set: XuanTie 910 adds more than 50 instructions beyond the RISC-V baseline.

Last month, The Verge reported that an internal ARM memo had instructed its staff to stop working with Huawei. With the US blacklisting China's telecom giant Huawei and banning any American company from doing business with it, it seems that ARM is also following the American strategy. Although ARM is based in the UK and is owned by the Japanese SoftBank group, it does have "US origin technology", as claimed in the internal memo. This may be one of the reasons why Alibaba is increasing its efforts in developing RISC-V, so that Chinese tech companies can become independent from Western technologies. A XuanTie 910 processor can assure Chinese companies of a stable future, with no fear of being banned by Western governments. Other than being cost-effective, RISC-V also has other advantages over ARM, such as greater flexibility. With complex licence policies and higher power requirements, it is going to be a challenge for ARM to compete against RISC-V and MIPS (Microprocessor without Interlocked Pipeline Stages) processors.

A Hacker News user comments, "I feel like we (USA) are forcing China on a path that will make them more competitive long term." Another user says, "China is going to be key here. It's not just a normal market - China may see this as essential to its ability to develop its technology. It's Made in China 2025 policy. That's taken on new urgency as the west has started cutting China off from western tech - so it may be normal companies wanting some insurance in case intel / arm cut them off (trade disputes etc) AND the govt itself wanting to product its industrial base from cutoff during trade disputes".

Some users also feel that it is technology that wins when two big economies keep bringing out innovative technologies. A comment on Hacker News reads, "Good to see development from any country. Obviously they have enough reason to do it. Just consider sanctions. They also have to protect their own market. Anyone that can afford it, should do it. Ultimately it is a good thing from technology perspective."

Not all US tech companies are wary of partnering with Chinese counterparts. Two days ago, Salesforce, an American cloud-based software company, announced a strategic partnership with Alibaba. The partnership aims to help Salesforce localize its products in mainland China, Hong Kong, Macau, and Taiwan, enabling Salesforce customers to market, sell, and operate through services like Alibaba Cloud and Tmall.

Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals
The US Justice Department opens a broad antitrust review case against tech giants
Salesforce is buying Tableau in a $15.7 billion all-stock deal

How Netflix uses AVA, an Image Discovery tool to find the perfect title image for each of its shows

Melisha Dsouza
04 Sep 2018
5 min read
Netflix, the video-on-demand streaming company, has seen a surge in its number of users as well as in the viewership of its TV shows, and it is constantly striving to provide an enriching experience to its viewers. To keep pace with the ever-increasing demands of user experience, Netflix is introducing a collection of tools and algorithms to make its content more audience relevant. AVA (Aesthetic Visual Analysis) analyses large volumes of images obtained from the video frames of a particular TV show to choose the title image for that show. Netflix understands that a visually appealing title image plays an incredibly important role in helping a viewer find new shows and movies to watch.

How title images are selected normally
Usually, content editors have to go through tens of thousands of video frames for a show to select a good title image. To give you a sense of the effort required: a single one-hour episode of 'Stranger Things' consists of nearly 86,000 static video frames. Imagine sifting through each of these frames painstakingly to find the perfect title image that will not only connect with viewers but also give them a gist of the storyline. On top of that, the number of frames can go up to a million depending on the number of episodes in a show. Manually screening the frames is nearly impossible and labor intensive, if not ineffective. Additionally, the editors choosing the image stills need in-depth expertise in the source content the images are intended to represent. Considering Netflix has an exponentially growing catalog of shows, surfacing meaningful images from videos is a very challenging expectation to place on editors. Enter AVA, using image classification algorithms to sort the right image at the right time.

What is AVA?
The ever-growing number of images on the internet has led to challenges in processing and classifying them. To address this concern, a research team from the University of Barcelona, Spain, in collaboration with Xerox Corporation, developed a method called Aesthetic Visual Analysis (AVA) as a research project. The project contains a vast database of over 250,000 images combined with metadata such as aesthetic scores, semantic labels for more than 60 categories of images, and many other characteristics. AVA rates images using statistical measures like standard deviation, mean score, and variance. Based on the distributions computed from these statistics, the researchers assess the semantic challenges and choose the right images for the database. AVA primarily alleviates the need for extensive benchmarking and trains on more images, and it also helps surface images with better aesthetic appeal. Computing performance can be significantly optimised to have less impact on the hardware. You can get more insights by reading the research paper.

The AVA approach used at Netflix
The process takes place in three steps. AVA starts by analysing images obtained through frame annotation, which involves processing and annotating many different variables on every individual frame of video to derive what the frame contains and to understand its importance to the story. To keep pace with its growing catalog of content, Netflix uses the Archer framework to process videos more efficiently; Archer splits the video into very small chunks to enable parallel video processing. After the frames are obtained, they are run through a series of image recognition algorithms to build metadata.
Metadata is further classified as visual, contextual, and composition metadata. To give a brief overview:
- Visual metadata: brightness, sharpness, and color.
- Contextual metadata: a combination of elements that together derive meaning from the actions or movement of the actors, objects, and camera in the frame, e.g. face detection, motion estimation, object detection, and camera shot identification.
- Composition metadata: intricate image details based on core principles in photography, cinematography, and visual aesthetic design, such as depth of field and symmetry.

Choosing the right picture
The 'best' image is chosen considering three important aspects: the lead actors, visual range, and sensitivity filters. Emphasis is given first to the lead actors of the show, since they make a visual impact. To identify the key character for a given episode, AVA uses a combination of face clustering and actor recognition to separate main characters from secondary characters or extras. Next comes the diversity of the images present in the video frames, which includes camera positions and image details such as brightness, color, and contrast, to name a few. Keeping these in mind, image frames are easy to group based on similarities, which helps in developing image support vectors. The vectors primarily assist in designing an image diversity index, where all the relevant images collected for an episode or even a movie can be scored based on visual appeal. Sensitive factors such as violence, nudity, and advertisements are filtered and given low priority in the image vectors, so they are screened out completely in the process. (Source: Netflix Blog)

What's in this for Netflix and its users?
Netflix's decision to use AVA will not only save manual labour but also reduce the cost of having people look through millions of images to get that one perfect shot. This approach will help in obtaining meaningful images from video and thus enable creative teams to invest their time in designing stunning artwork. As for its users, a good title image means establishing a deeper connection to a show's characters and storyline, thus improving their overall experience. To understand the intricate workings of AVA, you can read the Netflix engineering team's original post on Medium.

How everyone at Netflix uses Jupyter notebooks from data scientists, machine learning engineers, to data analysts
Netflix releases FlameScope
Netflix bring in Verna Myers as new VP of Inclusion strategy to boost cultural diversity
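As a toy illustration of scoring frames on the visual metadata described above (brightness and sharpness), the sketch below uses OpenCV. It is not Netflix's implementation; the frame paths and the weighting are invented for the example.

```python
# Toy frame-scoring sketch (not Netflix's code): rank candidate frames by simple
# visual metadata. Frame paths and the 50/50 weighting are invented for illustration.
import cv2

def frame_score(path: str) -> float:
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    brightness = gray.mean()                              # average pixel intensity
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()     # variance of the Laplacian
    return 0.5 * brightness + 0.5 * sharpness

candidates = ["frame_0001.png", "frame_0002.png", "frame_0003.png"]
best_frame = max(candidates, key=frame_score)
print(best_frame)
```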

Introducing Microsoft’s AirSim, an open-source simulator for autonomous vehicles built on Unreal Engine

Bhagyashree R
19 Sep 2019
4 min read
Back in 2017, the Microsoft Research team developed and open-sourced the Aerial Informatics and Robotics Simulation (AirSim). On Monday, the team shared how AirSim can be used to solve current challenges in the development of autonomous systems.

Microsoft AirSim and its features
Microsoft AirSim is an open-source, cross-platform simulation platform for autonomous systems, including autonomous cars, wheeled robots, aerial drones, and even static IoT devices. It works as a plugin for Epic Games' Unreal Engine, and there is also an experimental release for the Unity game engine. Here is an example of drone simulation in AirSim:
https://www.youtube.com/watch?v=-WfTr1-OBGQ&feature=youtu.be

AirSim was built to address two main problems developers face during the development of autonomous systems: first, the requirement of large datasets for training and testing the systems, and second, the ability to debug in a simulator. With AirSim, the team aims to equip developers with a platform offering varied training experiences, so that autonomous systems can be exposed to different scenarios before they are deployed in the real world. "Our goal is to develop AirSim as a platform for AI research to experiment with deep learning, computer vision and reinforcement learning algorithms for autonomous vehicles. For this purpose, AirSim also exposes APIs to retrieve data and control vehicles in a platform-independent way," the team writes.

AirSim provides physically and visually realistic simulations by supporting hardware-in-the-loop simulation with popular flight controllers such as PX4, an open-source autopilot system. It can easily be extended to accommodate new types of autonomous vehicles, hardware platforms, and software protocols, and its extensible architecture allows developers to quickly add custom autonomous system models and new sensors to the simulator.

AirSim for tackling common challenges in autonomous systems development
In April, the Microsoft Research team collaborated with Carnegie Mellon University and Oregon State University, collectively called Team Explorer, to take on the DARPA Subterranean (SubT) Challenge. The challenge was to build robots that can autonomously map, navigate, and search underground environments during time-sensitive combat operations or disaster response scenarios. On Monday, Microsoft's Senior Research Manager, Ashish Kapoor, shared how they used AirSim to approach this challenge.

Team Explorer and Microsoft used AirSim to create an "intricate maze" of man-made tunnels in a virtual world. To create this maze, the team used reference material from real-world mines to modularly generate a network of interconnected tunnels. This was a high-definition simulation of man-made tunnels that also included robotic vehicles and a suite of sensors. AirSim also provided a rich platform that Team Explorer could use to test their methods and to generate training experiences for building the decision-making components of autonomous agents. Microsoft believes that AirSim can also help accelerate the creation of a real dataset for underground environments. "Microsoft's ability to create near-realistic autonomy pipelines in AirSim means that we can rapidly generate labeled training data for a subterranean environment," Kapoor wrote.

Kapoor also talked about another collaboration, with Air Shepherd and USC, to help counter wildlife poaching using AirSim. In this collaboration, they developed unmanned aerial vehicles (UAVs) equipped with thermal infrared cameras that can fly through national parks to search for poachers and animals. AirSim was used to create a simulation of this use case, in which virtual UAVs flew over virtual environments at altitudes of 200 to 400 feet above ground level. "The simulation took on the difficult task of detecting poachers and wildlife, both during the day and at night, and ultimately ended up increasing the precision in detection through imaging by 35.2%," the post reads.

These were some of the recent use cases where AirSim was used. To explore more and to contribute, you can check out its GitHub repository.

Other news in Data
4 important business intelligence considerations for the rest of 2019
How artificial intelligence and machine learning can help us tackle the climate change emergency
France and Germany reaffirm blocking Facebook's Libra cryptocurrency
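The platform-independent control APIs mentioned above are exposed through client libraries. The sketch below uses AirSim's Python multirotor client and assumes an AirSim simulation is already running in Unreal Engine; the target position and velocity are arbitrary example values.

```python
# Minimal sketch with AirSim's Python client; requires a running AirSim simulation.
# The target position and velocity below are arbitrary example values.
import airsim

client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()                      # take off and wait for completion
client.moveToPositionAsync(10, 0, -10, 5).join()  # fly to (x=10, y=0, z=-10) in NED at 5 m/s

state = client.getMultirotorState()               # retrieve kinematics and status data
print(state.kinematics_estimated.position)
```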

PyTorch 1.0 is here with JIT, C++ API, and new distributed packages

Natasha Mathur
10 Dec 2018
4 min read
It was just two months ago that Facebook announced the release of PyTorch 1.0 RC1. Facebook is now out with the stable release of PyTorch 1.0. The latest release, announced last week at the NeurIPS conference, brings new features such as JIT, a brand new distributed package, and Torch Hub, along with breaking changes, bug fixes, and other improvements.

PyTorch is an open source, Python-based deep learning framework. "It accelerates the workflow involved in taking AI from research prototyping to production deployment, and makes it easier and more accessible to get started", reads the announcement page. Let's now have a look at what's new in PyTorch 1.0.

New features

JIT
JIT is a set of compiler tools capable of bridging the gap between research in PyTorch and production. It enables the creation of models that can run without any dependency on the Python interpreter. PyTorch 1.0 offers two ways to make your existing code compatible with the JIT: tracing with torch.jit.trace or annotating with torch.jit.script. Once the models have been annotated, Torch Script code can be optimized and serialized for later use in the new C++ API, which doesn't depend on Python.

Brand new distributed package
In PyTorch 1.0, the new torch.distributed package and torch.nn.parallel.DistributedDataParallel come backed by a brand new, re-designed distributed library. Major highlights of the new library are as follows:
- The new torch.distributed is performance driven and operates entirely asynchronously for all backends: Gloo, NCCL, and MPI.
- There are significant Distributed Data-Parallel performance improvements for hosts with slower networks, such as Ethernet-based hosts.
- It comes with async support for all distributed collective operations in the torch.distributed package.

C++ frontend [API unstable]
The C++ frontend is a complete C++ interface to the PyTorch backend. It follows the API and architecture of the established Python frontend and is meant to enable research in high performance, low latency, and bare metal C++ applications. It also offers equivalents to torch.nn, torch.optim, torch.data, and other components of the Python frontend. The PyTorch team has released the C++ frontend marked as "API unstable" as part of PyTorch 1.0: although it is ready to use for research applications, it still needs to be stabilized over future releases.

Torch Hub
Torch Hub is a pre-trained model repository designed to facilitate research reproducibility. It supports publishing pre-trained models (model definitions and pre-trained weights) to a GitHub repository with the help of a hubconf.py file. Once published, users can load the pre-trained models with the torch.hub.load API.

Breaking changes
- Indexing a 0-dimensional tensor now raises an error instead of a warning.
- torch.legacy has been removed.
- torch.masked_copy_ is removed; use torch.masked_scatter_ instead.
- torch.distributed: the TCP backend has been removed. It is recommended to use the Gloo and MPI backends for CPU collectives and the NCCL backend for GPU collectives.
- The torch.tensor function with a Tensor argument can now return a detached Tensor (i.e. a Tensor where grad_fn is None).
- torch.nn.functional.multilabel_soft_margin_loss now returns Tensors of shape (N,) instead of (N, C), to match the behaviour of torch.nn.MultiMarginLoss; it is also more numerically stable.
- Support for C extensions has been removed in PyTorch 1.0.
- torch.utils.trainer has been deprecated.

Bug fixes
- torch.multiprocessing now correctly handles CUDA tensors, requires_grad settings, and hooks.
- A memory leak during packing in tuples has been fixed.
- "RuntimeError: storages that don't support slicing" when loading models saved with PyTorch 0.3 has been fixed.
- The issue with calculated output sizes of torch.nn.Conv modules with stride and dilation has been fixed.
- torch.dist has been fixed for infinity, zero, and minus infinity norms.
- torch.nn.InstanceNorm1d now correctly accepts 2-dimensional inputs.
- torch.nn.Module.load_state_dict showed an incorrect error message, which has been fixed.
- A broadcasting bug in torch.distributions.studentT.StudentT has been fixed.

Other changes
- "Advanced indexing" performance has been considerably improved on both CPU and GPU.
- torch.nn.PReLU speed has been improved on both CPU and GPU.
- Printing large tensors has become faster.
- N-dimensional empty tensors have been added in PyTorch 1.0, allowing tensors with 0 elements to have an arbitrary number of dimensions; they also support indexing and other torch operations.

For more information, check out the official release notes.

Can a production-ready Pytorch 1.0 give TensorFlow a tough time?
Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support
What is PyTorch and how does it work?
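To make the JIT and Torch Hub workflows above concrete, the sketch below traces a torchvision model into Torch Script, saves it for the Python-free C++ runtime, and loads a published pre-trained model through torch.hub.load. The repository and model names follow the public torchvision hub; the output file name is illustrative, and the pretrained= keyword reflects the 1.0-era API (newer torchvision versions use weights= instead).

```python
# Sketch of the JIT and Torch Hub workflows described above (file name illustrative;
# `pretrained=True` matches the 1.0-era API, newer torchvision uses `weights=`).
import torch
import torchvision.models as models

# Trace an eager-mode model into Torch Script, then serialize it for the C++ API.
model = models.resnet18(pretrained=True).eval()
example_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)
traced.save("resnet18_traced.pt")

# Torch Hub: load a published pre-trained model directly from a GitHub repository.
hub_model = torch.hub.load("pytorch/vision", "resnet18", pretrained=True)
```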

Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare

Sugandha Lahoti
04 Dec 2018
3 min read
Google's DeepMind is turning its attention to using AI for science and healthcare. Last month, Google made major inroads into healthcare tech by absorbing DeepMind Health, and in August its AI was successful in spotting over 50 sight-threatening eye diseases. Now it has solved another tough science problem. At an international conference in Cancun on Sunday, DeepMind's latest AI system, AlphaFold, won the Critical Assessment of Structure Prediction (CASP) competition.

CASP is held every two years and invites participants to submit models that predict the 3D structure of a protein from its amino acid sequence. The ability to predict a protein's shape is useful to scientists because it is fundamental to understanding the protein's role within the body. It is also used for diagnosing and treating diseases such as Alzheimer's, Parkinson's, Huntington's, and cystic fibrosis. AlphaFold's SUMZ score was 127.9 (the previous winner's SUMZ score was 80.46), achieving what CASP called "unprecedented progress in the ability of computational methods to predict protein structure." The second-placed team, named Zhang, scored 107.6.

How does DeepMind's AlphaFold work?
AlphaFold's team trained a neural network to predict a separate distribution of distances between every pair of residues in a protein. These probabilities were then combined into a score that estimates how accurate a proposed protein structure is. They also trained a separate neural network that uses all the distances in aggregate to estimate how close the proposed structure is to the right answer. The scoring functions were used to search the protein landscape for structures that matched the predictions.

The team used two distinct methods to construct predictions of full protein structures. The first method repeatedly replaced pieces of a protein structure with new protein fragments; a generative neural network was trained to invent new fragments that improve the score of the proposed structure. The second method optimized the scores through gradient descent to build highly accurate structures. This technique was applied to entire protein chains rather than to pieces that must be folded separately before being assembled, reducing the complexity of the prediction process.

DeepMind founder and CEO Demis Hassabis celebrated the victory in a tweet.
https://twitter.com/demishassabis/status/1069411081603481600
Google CEO Sundar Pichai was also excited about this development and how AI can be used for scientific discovery.
https://twitter.com/sundarpichai/status/1069450462284267520

NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
Google makes major inroads into healthcare tech by absorbing DeepMind Health
A new episodic memory-based curiosity model to solve procrastination in RL agents by Google Brain, DeepMind and ETH Zurich
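The second construction method described above (gradient descent on a structure's score) can be illustrated with a heavily simplified sketch: given a matrix of predicted pairwise distances, refine 3D coordinates until the structure matches them. This is not AlphaFold's code; the residue count and the predicted distances are random stand-ins.

```python
# Heavily simplified sketch of structure refinement by gradient descent
# (not AlphaFold itself; the predicted distances here are random stand-ins).
import torch

n_residues = 50
predicted_dist = torch.rand(n_residues, n_residues) * 20.0   # stand-in distance predictions
coords = torch.randn(n_residues, 3, requires_grad=True)      # 3D coordinates to optimize
optimizer = torch.optim.Adam([coords], lr=0.1)

for step in range(500):
    current_dist = torch.cdist(coords, coords)                # pairwise distances of the model
    loss = ((current_dist - predicted_dist) ** 2).mean()      # mismatch with the predictions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```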

French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR

Sugandha Lahoti
22 Jan 2019
3 min read
The French data regulator, the National Data Protection Commission (CNIL), has imposed a financial penalty of 50 million euros on Google for failing to comply with the GDPR. After a thorough analysis, CNIL observed that the information provided by Google is not easily accessible for users, nor is it always clear or comprehensive.

CNIL started this investigation after receiving complaints from None Of Your Business and La Quadrature du Net. They complained about Google "not having a valid legal basis to process the personal data of the users of its services, particularly for ads personalization purposes."
https://twitter.com/laquadrature/status/1087406112582914050
https://twitter.com/NOYBeu/status/1087458762359824385

Following its own investigation into the complaints, CNIL also found Google guilty of not validly obtaining proper user consent for ad personalization purposes. Per the committee, Google makes it hard for people to understand how their data is being used by relying on broad and obscure wording. For example, CNIL says, "in the section 'Ads Personalization', it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Play store, Google pictures…) and therefore of the amount of data processed and combined."

Google also violates GDPR rules when new Android users set up a phone and follow Android's onboarding process. The committee found that when an account is created, the user can modify some options associated with the account by clicking on 'More options'; however, the ads personalization option is pre-ticked, which violates the GDPR requirement that consent must be unambiguous. Furthermore, the GDPR states that consent is "specific" only if it is given distinctly for each purpose. Google violates this too: before creating an account, the user is asked to tick the boxes "I agree to Google's Terms of Service" and "I agree to the processing of my information as described above and further explained in the Privacy Policy" in order to create the account. The user therefore gives his or her consent in full, for all the processing purposes carried out by Google.

Netizens feel that 50 million euros is far too small a fine for an organization the size of Google. However, a Hacker News user countered that argument, saying that "Google or any other company does not get to just continue their practices, as usual, the fine is pure 'punishment' for the bad behavior in the past. Google would gladly pay them if it meant they could continue their anti-competitive practices, it would just be a cost of doing business. But that's not the point of them. The real teeth are in the changes they will be forced to make." Twitter users were also in support of CNIL.
https://twitter.com/AlexT_KN/status/1087466073161641984
https://twitter.com/mcfslaw/status/1087552151377797120
https://twitter.com/chesterj1/status/1087387249178750983
https://twitter.com/carlboutet/status/1087471877143085056

A Google spokesperson gave TechCrunch the following statement: "People expect high standards of transparency and control from us. We're deeply committed to meeting those expectations and the consent requirements of the GDPR. We're studying the decision to determine our next steps."

Googlers launch industry-wide awareness campaign to fight against forced arbitration
EU slaps Google with $5 billion fine for the Android antitrust case
Google+ affected by another bug, 52M users compromised, shut down within 90 days

Google is looking to acquire Looker, a data analytics startup for $2.6 billion even as antitrust concerns arise in Washington

Sugandha Lahoti
07 Jun 2019
5 min read
Google has entered into an agreement with data analytics startup Looker and is planning to add it to its Google Cloud division. The acquisition will cost Google $2.6 billion in an all-cash transaction. After the acquisition, the Looker organization will report to Frank Bien, who will report to Thomas Kurian, CEO of Google Cloud. Looker is Google's biggest acquisition since it bought smart home company Nest for $3.2 billion in 2014.

Looker's analytics platform uses business intelligence and data visualization tools. Founded in 2011, Looker has grown rapidly and now helps more than 1,700 companies understand and analyze their data. The company had raised more than $280 million in funding, according to Crunchbase. Looker bridges the gap between data warehousing and business intelligence: its platform includes a modeling layer in which the user codifies their view of the data using a SQL-like proprietary modeling language (LookML), complemented by an end-user visualization tool providing the self-service analytics portion.

Primarily, Looker will help Google Cloud become a complete analytics solution that takes customers from ingesting data to visualizing results and integrating data and insights into their daily workflows. Looker plus Google Cloud will be used for:
- Connecting, analyzing, and visualizing data across Google Cloud, Azure, AWS, on-premise databases, or ISV SaaS applications
- Operationalizing BI for everyone with powerful data modeling
- Augmenting business intelligence from Looker with artificial intelligence from Google Cloud
- Creating collaborative, data-driven applications for industries with interactive data visualization and machine learning

Implications of Google + Looker
Google and Looker already have a strong existing partnership and 350 common customers (such as Buzzfeed, Hearst, King, Sunrun, WPP Essence, and Yahoo!), and this acquisition will only strengthen it. "We have many common customers we've worked with. One of the great things about this acquisition is that the two companies have known each other for a long time, we share very common culture," Kurian said in a blog.

This is also a significant move by Google to gain market share from Amazon Web Services, which reported $7.7 billion in revenue for the last quarter. Google Cloud has been trailing behind Amazon and Microsoft in the cloud-computing market, and Looker's acquisition will hopefully make its service more attractive to corporations. Looker's CEO Frank Bien described the partnership as a chance to gain the scale of the Google Cloud platform. "What we're really leveraging here, and I think the synergy with Google Cloud, is that this data infrastructure revolution and what really emerged out of the Big Data trend was very fast, scalable — and now in the cloud — easy to deploy data infrastructure," he said.

What is intriguing is Google's timing and the all-cash payment for this buyout. The FCC, DOJ, and Congress are currently looking at bringing potential antitrust action against Google and other big tech companies. According to widespread media reports, the US Department of Justice is readying an investigation into Google. It has been reported that the probe would examine whether the tech giant broke antitrust law in the operation of its online and advertising businesses. According to Paul Gallant, a tech analyst with Cowen who focuses on regulatory issues, "A few years ago, this deal would have been waved through without much scrutiny. We're in a different world today, and there might well be some buyer's remorse from regulators on prior tech deals like this."

Public reaction to the deal has been mixed. While some are happy:
https://twitter.com/robgo/status/1136628768968192001
https://twitter.com/holgermu/status/1136639110892810241
Others remain dubious: "With Looker out of the way, the question turns to 'What else is on Google's cloud shopping list?'," said Aaron Kessler, a Raymond James analyst, in a report. "While the breadth of public cloud makes it hard to list specific targets, vertical specific solutions appear to be a strategic priority for Mr. Kurian."

There are also questions about whether Google will limit Looker to BigQuery, or at least give it the newest features first.
https://twitter.com/DanVesset/status/1136672725060243457
Then there is the issue of whether Google will limit which clouds Looker can run on, although the company said it will continue to support Looker's multi-cloud strategy and will expand support for multiple analytics tools and data sources to provide customers with choice. Google Cloud will also continue to expand Looker's investments in product development, go-to-market, and customer success capabilities.

Google is also known for killing off its own products and undermining some of its acquisitions. With Nest, for example, Google said that it would be integrated with Google Assistant, and the decision was reversed only after a massive public backlash. Looker could be one such acquisition and may eventually merge with Google Analytics, Google's proprietary web analytics service. The deal is expected to close later this year, subject to regulatory approval.

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google and Binomial come together to open-source Basis Universal Texture Format
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language

Graph Nets – DeepMind's library for graph networks in Tensorflow and Sonnet

Sunith Shetty
19 Oct 2018
3 min read
Graph Nets is DeepMind's new library for building graph networks in TensorFlow and Sonnet. Last week, a paper, Relational inductive biases, deep learning, and graph networks, was published on arXiv by researchers from DeepMind, Google Brain, MIT, and the University of Edinburgh. The paper introduces a new machine learning framework called graph networks, which is expected to bring new innovations to the artificial general intelligence realm.

What are graph networks?
Graph networks can generalize and extend various types of neural networks to perform calculations on graphs. They can implement relational inductive bias, a technique used for reasoning about inter-object relations. The graph networks framework is based on graph-to-graph modules. Each graph's features are represented by three characteristics:
- Nodes
- Edges: relations between the nodes
- Global attributes: system-level properties
The graph network takes a graph as input, performs the required operations and calculations across the edges, the nodes, and the global attributes, and then returns a new graph as output. The research paper argues that graph networks can support two critical human-like capabilities:
- Relational reasoning: drawing logical conclusions about how different objects and things relate to one another
- Combinatorial generalization: constructing new inferences, behaviors, and predictions from known building blocks
To understand and learn more about graph networks, you can refer to the official research paper.

Graph Nets
The Graph Nets library can be installed from pip. To install the library, run the following command:
$ pip install graph_nets
The installation is compatible with Linux/Mac OS X and Python versions 2.7 and 3.4+. The library includes Jupyter notebook demos that let you create, manipulate, and train graph networks to perform tasks such as a shortest-path-finding task, a sorting task, and a prediction task. Each demo uses the same graph network architecture, which shows the flexibility of the approach. You can try out the various demos in your browser using Colaboratory; in other words, you don't need to install anything locally when running the demos in the browser (or on a phone) via the cloud Colaboratory backend. You can also run the demos on your local machine by installing the necessary dependencies.

What's ahead?
The concept draws not only on artificial intelligence research but also on ideas from the computer and cognitive sciences. Graph networks are still an early-stage research theory that does not yet offer convincing experimental results, but it will be very interesting to see how well they live up to the hype as they mature. To try out the open source library, you can visit the official GitHub page. To provide comments or suggestions, you can contact graph-nets@google.com.

Read more
2018 is the year of graph databases. Here's why.
Why Neo4j is the most popular graph database
Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support
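As a sketch of how a graph with nodes, edges, and global attributes is described to the library, the snippet below builds a GraphsTuple from a data dict, following the pattern used in the library's demo notebooks. The feature sizes are invented, and the utils_tf.data_dicts_to_graphs_tuple helper is assumed to be available as in those demos.

```python
# Sketch of describing a 3-node, 2-edge graph to Graph Nets (feature sizes invented;
# follows the data-dict pattern from the library's demo notebooks).
import numpy as np
from graph_nets import utils_tf

data_dict = {
    "nodes": np.random.rand(3, 5).astype(np.float32),     # 3 nodes, 5 features each
    "edges": np.random.rand(2, 4).astype(np.float32),     # 2 edges, 4 features each
    "senders": np.array([0, 1]),                           # edge i starts at senders[i] ...
    "receivers": np.array([1, 2]),                         # ... and ends at receivers[i]
    "globals": np.random.rand(7).astype(np.float32),       # system-level attributes
}

graphs_tuple = utils_tf.data_dicts_to_graphs_tuple([data_dict])
print(graphs_tuple.nodes.shape)   # (3, 5)
```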

Frenemies: Intel and AMD partner on laptop chip to keep Nvidia at bay

Abhishek Jha
09 Nov 2017
3 min read
For decades, Intel and AMD have been bitter archrivals. Today, they find themselves teaming up to thwart a common enemy: Nvidia. When Intel revealed its partnership with Advanced Micro Devices (AMD) on a next-generation notebook chip, it marked the first time the two chip giants have collaborated since the '80s. The proposed chip for thin and lightweight laptops combines an Intel processor and an AMD graphics unit for complex video gaming. The new series of processors will be part of Intel's 8th-generation Core H-series mobile chips, expected to hit the market in the first quarter of 2018.

What it means is that Intel's high-performance x86 cores will be combined with AMD Radeon graphics in the same processor package using Intel's EMIB multi-die technology. That is not all: Intel is also bundling the design with built-in High Bandwidth Memory (HBM2) RAM. The new processor, Intel claims, reduces the usual silicon footprint by about 50%. And with a 'semi-custom' graphics processor from AMD, enthusiasts can look forward to discrete-graphics-level performance for playing games, editing photos or videos, and other tasks that can leverage modern GPU technologies.

What does AMD get?
Having struggled to remain profitable in recent times, AMD has been losing share in the discrete notebook GPU market. The deal could bring additional revenue along with increased market share. Most importantly, laptops built with the new processors won't compete with AMD's Ryzen chips, which are also designed for ultrathin laptops. AMD clarified the difference: while the new Intel chips are designed for serious gamers, the Ryzen chips (due out at the end of the year) can run games but are not specifically designed for that purpose.

"Our collaboration with Intel expands the installed base for AMD Radeon GPUs and brings to market a differentiated solution for high-performance graphics," said Scott Herkelman, vice president and general manager of AMD's Radeon Technologies Group. "Together we are offering gamers and content creators the opportunity to have a thinner-and-lighter PC capable of delivering discrete performance-tier graphics experiences in AAA games and content creation applications."

While more information will be available in the future, the first machines with the new technology are expected to arrive in the first quarter of 2018. Nvidia's stock fell on the news, while both AMD and Intel saw their shares surge. A rivalry that began when AMD reverse-engineered the Intel 8080 microchip in 1975 could still be far from over, but in graphics the two have been rather cordial. Despite hating each other since their formation, both decided to pick the other as the lesser evil over Nvidia. This is why the Intel-AMD laptop chip partnership has a definite future. Currently centered around laptop solutions, it could even stretch to desktops, who knows!

GitLab 12.3 releases with web application firewall, keyboard shortcuts, productivity analytics, system hooks and more

Amrata Joshi
23 Sep 2019
3 min read
Yesterday, the team at GitLab released GitLab 12.3, a DevOps lifecycle tool that provides a Git repository manager. This release comes with a Web Application Firewall, Productivity Analytics, a new Environments section, and much more.

What's new in GitLab 12.3?

Web Application Firewall
In GitLab 12.3, the team has shipped the first iteration of the Web Application Firewall, built into the GitLab SDLC platform. The Web Application Firewall focuses on monitoring and reporting security concerns related to Kubernetes clusters.

Productivity Analytics
From GitLab 12.3, the team has started releasing Productivity Analytics, which will help teams and their leaders discover best practices for better productivity. It helps in drilling into the data and gaining insights for future improvements. A group-level analytics workspace can be used to provide insight into performance, productivity, and visibility across multiple projects.

Environments section
This release adds an "Environments" section to the cluster page, giving an overview of all the projects making use of the Kubernetes cluster.

License compliance
The License Compliance feature can be used to disallow a merge when a blacklisted license is found in a merge request.

Keyboard shortcuts
This release adds new 'n' and 'p' keyboard shortcuts that can be used to move to the next and previous unresolved discussions in merge requests.

System hooks
System hooks allow automation by triggering requests whenever a variety of events take place in GitLab.

Multiple IP subnets
This release introduces the ability to specify multiple IP subnets, so instead of specifying a single range, it is now possible for large organizations to restrict incoming traffic to their specific needs.

GitLab Runner 12.3
Yesterday, the team also released GitLab Runner 12.3, an open-source project that is used for running CI/CD jobs and sending the results back to GitLab.

Audit logs
In this release, the audit logs for push events are disabled by default to prevent performance degradation on GitLab instances.

A few GitLab users are unhappy because some of the features of this release, including Productivity Analytics, are available to Premium or Ultimate users only.
https://twitter.com/gav_taylor/status/1175798696769916932
To know more about this release, check out the official page.

Other interesting news in cloud and networking
Kubernetes 1.16 releases with Endpoint Slices, general availability of Custom Resources, and other enhancements
DevOps platform for coding, GitLab reached more than double valuation of $2.75 billion than its last funding and way ahead of its IPO in 2020
Istio 1.3 releases with traffic management, improved security, and more!