
Tech News


PyTorch 1.0 is here with JIT, C++ API, and new distributed packages

Natasha Mathur
10 Dec 2018
4 min read
It was just two months ago that Facebook announced the release of PyTorch 1.0 RC1. Facebook is now out with the stable release of PyTorch 1.0. The latest release, announced last week at the NeurIPS conference, brings new features such as JIT, a brand new distributed package, and Torch Hub, along with breaking changes, bug fixes, and other improvements.

PyTorch is an open source, Python-based deep learning framework. "It accelerates the workflow involved in taking AI from research prototyping to production deployment, and makes it easier and more accessible to get started", reads the announcement page. Let's now have a look at what's new in PyTorch 1.0.

New Features

JIT

JIT is a set of compiler tools capable of bridging the gap between research in PyTorch and production. It enables the creation of models that can run without any dependency on the Python interpreter. PyTorch 1.0 offers two ways to make your existing code compatible with the JIT: torch.jit.trace or torch.jit.script. Once the models have been annotated, Torch Script code can be optimized and serialized for later use in the new C++ API, which doesn't depend on Python.

Brand new distributed package

In PyTorch 1.0, the new torch.distributed package and torch.nn.parallel.DistributedDataParallel come backed by a brand new, redesigned distributed library. Major highlights of the new library are as follows:

- The new torch.distributed is performance driven and operates entirely asynchronously for all backends: Gloo, NCCL, and MPI.
- There are significant distributed data-parallel performance improvements for hosts with slower networks, such as Ethernet-based hosts.
- It adds async support for all distributed collective operations in the torch.distributed package.

C++ frontend [API unstable]

The C++ frontend is a complete C++ interface to the PyTorch backend. It follows the API and architecture of the established Python frontend and is meant to enable research in high-performance, low-latency, bare-metal C++ applications. It also offers equivalents to torch.nn, torch.optim, torch.data, and other components of the Python frontend. The PyTorch team has released the C++ frontend marked as "API Unstable" as part of PyTorch 1.0: although it is ready to use for research applications, it will continue to be stabilized over future releases.

Torch Hub

Torch Hub is a pre-trained model repository designed to facilitate research reproducibility. It supports publishing pre-trained models (model definitions and pre-trained weights) to a GitHub repository with the help of a hubconf.py file. Once published, users can load the pre-trained models with the torch.hub.load API.

Breaking Changes

- Indexing a 0-dimensional tensor now raises an error instead of a warning.
- torch.legacy has been removed.
- torch.masked_copy_ has been removed; use torch.masked_scatter_ instead.
- torch.distributed: the TCP backend has been removed. It is recommended to use the Gloo and MPI backends for CPU collectives and the NCCL backend for GPU collectives.
- The torch.tensor function with a Tensor argument can now return a detached Tensor (i.e. a Tensor whose grad_fn is None).
- torch.nn.functional.multilabel_soft_margin_loss now returns Tensors of shape (N,) instead of (N, C), to match the behaviour of torch.nn.MultiMarginLoss; it is also more numerically stable.
- Support for C extensions has been removed in PyTorch 1.0.
- torch.utils.trainer has been deprecated.

Bug fixes

- torch.multiprocessing now correctly handles CUDA tensors, requires_grad settings, and hooks.
- A memory leak during packing in tuples has been fixed.
- The RuntimeError "storages that don't support slicing", raised when loading models saved with PyTorch 0.3, has been fixed.
- The incorrectly calculated output sizes of torch.nn.Conv modules with stride and dilation have been fixed.
- torch.dist has been fixed for infinity, zero, and minus infinity norms.
- torch.nn.InstanceNorm1d now correctly accepts 2-dimensional inputs.
- torch.nn.Module.load_state_dict showed an incorrect error message, which has been fixed.
- A broadcasting bug in torch.distributions.studentT.StudentT has been fixed.

Other Changes

- "Advanced indexing" performance has been considerably improved on both CPU and GPU.
- torch.nn.PReLU speed has been improved on both CPU and GPU.
- Printing large tensors has become faster.
- N-dimensional empty tensors have been added, allowing tensors with 0 elements to have an arbitrary number of dimensions. They also support indexing and other torch operations.

For more information, check out the official release notes.

Can a production-ready Pytorch 1.0 give TensorFlow a tough time?
Pytorch.org revamps for Pytorch 1.0 with design changes and added Static graph support
What is PyTorch and how does it work?
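To make the JIT workflow above concrete, here is a minimal sketch (not code from the release announcement; the toy model, tensor shapes, and file name are invented for illustration) that traces a small model to Torch Script and reloads it:

```python
import torch
import torch.nn as nn

# Hedged sketch, not code from the release notes: trace a toy model to
# Torch Script so it can later run without the Python interpreter,
# e.g. via the new C++ API.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2)).eval()
example = torch.rand(1, 8)                # example input drives the trace

traced = torch.jit.trace(model, example)  # records the executed graph
traced.save("model.pt")                   # serialized Torch Script module

loaded = torch.jit.load("model.pt")       # also loadable from C++
print(loaded(example))
```

Tracing records the operations actually executed on the example input, while torch.jit.script compiles the code itself, so scripting is the better fit for models with data-dependent control flow.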


Valve announces Half-Life: Alyx, its first flagship VR game

Savia Lobo
19 Nov 2019
3 min read
Yesterday, Valve Corporation, the popular American video game developer, announced Half-Life: Alyx, the first new game in the popular Half-Life series in over a decade. The company tweeted that it will unveil the first look on Thursday, 21st November 2019, at 10 am Pacific Time.

https://twitter.com/valvesoftware/status/1196566870360387584

Half-Life: Alyx, a brand-new game in the Half-Life universe, is designed exclusively for PC virtual reality systems (Valve Index, Oculus Rift, HTC Vive, Windows Mixed Reality).

Over its history in PC games, Valve has created some of the most influential and critically acclaimed games ever made. However, "Valve has famously never finished either of its Half-Life supposed trilogies of games. After Half-Life and Half-Life 2, the company created Half-Life: Episode 1 and Half-Life: Episode 2, but no third game in the series," the Verge reports.

Ars Technica reveals, "The game's name confirms what has been loudly rumored for months: that you will play this game from the perspective of Alyx Vance, a character introduced in 2004's Half-Life 2. Instead of stepping forward in time, HLA will rewind to the period between the first two mainline Half-Life games."

"A data leak from Valve's Source 2 game engine, as uncovered in September by Valve News Network, pointed to a new control system labeled as the 'Grabbity Gloves' in its codebase. Multiple sources have confirmed that this is indeed a major control system in HLA," Ars Technica claims. These Grabbity Gloves can also be described as 'magnet gloves', which let you point at distant objects and attract them to your hands. Valve has already announced plans to support all major PC VR systems for its next VR game, and these new gloves seem like the right system to scale to whatever controllers come to VR.

Many gamers are excited to check out this Half-Life installment and are also looking forward to seeing whether the company lives up to what it says. A user on Hacker News commented, "Wonder what Valve is doubling down with this title? It seems like the previous games were all ground-breaking narratives, but with most of the storytellers having left in the last few years, I'd be curious to see what makes this different than your standard VR games."

Another user on Hacker News commented, "From the tech side it was the heavy, and smart, use of scripting that made HL1 stand out. With HL2 it was the added physics engine trough the change to Source, back then that used to be a big deal and whole gameplay mechanics revolve around that (gravity gun). In that context, I do not really consider it that surprising for the next HL project to focus on VR because even early demos of that combination looked already very promising 5 years ago"

We will update this space after Half-Life: Alyx is unveiled on Thursday. To know more about the announcement in detail, read Ars Technica's complete coverage.

Valve reveals new Index VR Kit with detail specs and costs upto $999
Why does Oculus CTO John Carmack prefer 2D VR interfaces over 3D Virtual Reality interfaces?
Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!


Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images

Vincy Davis
03 Jul 2019
3 min read
Yesterday, Youhana Naseim, the Group Engineering Manager at Azure Pipelines, provided a post-mortem of the bug that caused the sqlite3 module in the Ubuntu 16.04 image for Python to go missing from May 14th. The Azure DevOps team identified the bug on May 31st and fixed it on June 26th. Naseim apologized to all the affected customers for the delay in detecting and fixing the issue.

https://twitter.com/hawl01475954/status/1134053763608530945
https://twitter.com/ProCode1/status/1134325517891411968

How the Azure DevOps team detected and fixed the issue

The Azure DevOps team upgraded the versions of Python included in the Ubuntu 16.04 image with the M151 payload. These versions of Python's build scripts treat sqlite3 as an optional module, so the builds completed successfully despite the missing sqlite3 module. Naseim says, "While we have test coverage to check for the inclusion of several modules, we did not have coverage for sqlite3 which was the only missing module."

The issue was first reported via the Azure Developer Community on May 20th by a user who received the M151 deployment containing the bug. But the Azure support team escalated it only after receiving more reports during the M152 deployment on May 31st. The support team then proceeded with the M153 deployment, after posting a workaround for the issue, as the M152 deployment would take at least 10 days. Further, due to an internal miscommunication, the support team didn't start the M153 deployment to Ring 0 until June 13th. (To safeguard the production environment, Azure DevOps rolls out changes in a progressive and controlled manner via the ring model of deployments.) The team then resumed deployment to Ring 1 on June 17th and reached Ring 2 by June 20th. Finally, after a few failures, the team fully deployed M153 by June 26th.

Azure's future plans to deliver timely fixes

The Azure team has set out plans to improve its deployment and hotfix processes with the aim of delivering timely fixes. The long-term plan is to give customers the ability to revert to the previous image as a quick workaround for issues introduced in new images. The medium- and short-term plans are given below.

Medium-term plans:
- Add the ability to better compare what changed on the images, to catch any unexpected discrepancies that the test suite might miss.
- Increase the speed and reliability of the deployment process.

Short-term plans:
- Build a full CI pipeline for image generation, verifying images daily.
- Add test coverage for all modules in the Python standard library, including sqlite3 (a sketch of such a check appears below).
- Improve communication with the support team so that issues are escalated more quickly.
- Add telemetry so that issues can be detected and diagnosed more quickly.
- Implement measures that enable reverting to prior image versions quickly, to mitigate issues faster.

Visit the Azure DevOps status site for more details.

Read More
Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards
Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
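As a rough illustration of the coverage gap described above, here is a hedged sketch (not Azure's actual test suite; the module list and script are hypothetical) of a CI check that fails an image build when optional standard-library modules have been silently dropped:

```python
# Hypothetical CI check: fail the image build if optional stdlib
# modules (like sqlite3) were skipped during the Python build.
import importlib
import sys

REQUIRED_STDLIB_MODULES = ["sqlite3", "ssl", "zlib", "lzma", "bz2", "ctypes"]

missing = []
for name in REQUIRED_STDLIB_MODULES:
    try:
        importlib.import_module(name)
    except ImportError:
        missing.append(name)

if missing:
    sys.exit(f"Python build is missing stdlib modules: {', '.join(missing)}")
print("All required stdlib modules are present.")
```

Because CPython's build treats these modules as optional, a check like this has to run against the finished interpreter; a successful compile alone proves nothing, which is exactly how the bug slipped through.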


Microsoft announces ‘Decentralized Identity’ in partnership with DIF and W3C Credentials Community Group

Bhagyashree R
12 Oct 2018
3 min read
Yesterday, Microsoft published a white paper on its Decentralized Identity (DID) solution. These identities are user-generated, self-owned, globally unique identifiers rooted in decentralized systems. Over the past 18 months, Microsoft has been working towards building a digital identity system using blockchain and other distributed ledger technologies. With these identities, it aims to enhance personal privacy, security, and control.

Microsoft has been actively collaborating with members of the Decentralized Identity Foundation (DIF), the W3C Credentials Community Group, and the wider identity community. They are working with these groups to identify and develop critical standards. Together they plan to establish a unified, interoperable ecosystem that developers and businesses can rely on to build more user-centric products, applications, and services.

Why is decentralized identity (DID) needed?

Nowadays, people use digital identities at work, at home, and across every app, service, and device. Access to these digital identities, such as email addresses and social network IDs, can be removed at any time by the email provider, social network provider, or other external parties. Users also give permissions to numerous apps and devices, which calls for a high degree of vigilance in tracking who has access to what information.

This standards-based decentralized identity system empowers users and organizations to have greater control over their data. It addresses the problem of users granting broad consent to countless apps and services by providing a secure, encrypted digital hub where users can store their identity data and easily control access to it.

What it means for users, developers, and organizations

Benefits for users:
- It enables all users to own and control their identity.
- It provides secure experiences that incorporate privacy by design.
- It supports the design of user-centric apps and services.

Benefits for developers:
- It allows developers to provide users personalized experiences while respecting their privacy.
- It enables developers to participate in a new kind of marketplace, where creators and consumers exchange directly.

Benefits for organizations:
- Organizations can deeply engage with users while minimizing privacy and security risks.
- It provides a unified data protocol for organizations to transact with customers, partners, and suppliers.
- It improves the transparency and auditability of business operations.

To know more about decentralized identity, read the white paper published by Microsoft.

Microsoft joins the Open Invention Network community, making 60,000 of its patents accessible to fellow members
Microsoft invests in Grab; together aim to conquer the Southeast Asian on-demand services market with Azure's Intelligent Cloud
Microsoft announces Project xCloud, a new Xbox game streaming service, on the heels of Google's Stream news last week


.NET Core 3.0 is now available with C# 8, F# 4.7, ASP.NET Core 3.0 and general availability of EF Core 3.0 and EF 6.3

Sugandha Lahoti
24 Sep 2019
5 min read
Yesterday, at the ongoing .NET Conf 2019, .NET Core 3.0 was released along with ASP.NET Core 3.0 and Blazor updates. C# 8 and F# 4.7 are also part of this release. Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available.

What's new in .NET Core 3.0

.NET Core 3.0 adds Windows Forms and WPF (Windows Presentation Foundation) support, new JSON APIs, support for ARM64, and performance improvements across the board. Here are the key highlights:

Support for Windows desktop apps
.NET Core now supports Windows desktop apps built with both Windows Forms and WPF (both of which are now open source). The WPF designer is part of Visual Studio 2019 16.3, which was also released yesterday; this includes new templates and an updated XAML designer with XAML Hot Reload. The Windows Forms designer is still in preview and available as a VSIX download.

Support for C# 8 and F# 4.7
C# 8 was released last week and adds async streams, ranges/indices, more patterns, and nullable reference types. F# 4.7 was released in parallel to .NET Core 3.0 with a focus on infrastructural changes to the compiler and core library and some relaxations of previously onerous syntax requirements. It also includes support for LangVersion and ships with nameof and the opening of static classes in preview.

Read Also: Getting started with F# for .Net Core application development [Tutorial]

.NET Core apps now have executables by default
Apps can now be launched with an app-specific executable, like myapp or ./myapp, depending on the operating system.

Support for new JSON APIs
High-performance JSON APIs have been added for reader/writer, object model, and serialization scenarios. These APIs minimize allocations, resulting in faster performance and much less work for the garbage collector.

Support for Raspberry Pi and Linux ARM64 chips
These chips enable IoT development with the remote Visual Studio debugger. You can deploy apps that listen to sensors and print messages or images on a display, all using the new GPIO APIs. ASP.NET can be used to expose data as an API or as a site that enables configuring an IoT device.

Read Also: .NET Core 3.0 Preview 6 is available, packed with updates to compiling assemblies.

.NET Core 3.0 is a 'current' release and will be available with RHEL 8. It will be superseded by .NET Core 3.1, targeted for November 2019. If you're on .NET Core 2.2, you have until the end of the year to update to 3.1, which will be LTS. You can read a detailed report of all .NET Core 3.0 features.

What's new in ASP.NET Core 3.0

ASP.NET Core 3.0, for building web apps, was released in parallel with .NET Core. Notably, ASP.NET Core 3.0 includes Blazor, a new framework in ASP.NET Core for building interactive client-side web UI with .NET. With Blazor, you can create rich interactive UIs using C# instead of JavaScript, and share server-side and client-side app logic written in .NET. Blazor renders the UI as HTML and CSS for wide browser support, including mobile browsers.

Other updates in ASP.NET Core 3.0:
- You can now create high-performance backend services with gRPC.
- SignalR now has support for automatic reconnection and client-to-server streaming.
- Endpoint routing is integrated throughout the framework.
- HTTP/2 is now enabled by default in Kestrel.
- Authentication support for Web APIs and single-page apps is integrated with IdentityServer.
- Support for certificate and Kerberos authentication.
- A new generic host sets up common hosting services like dependency injection (DI), configuration, and logging.
- A new Worker Service template for building long-running services.

For a full list of features, visit Microsoft Docs.

Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available

As part of the .NET Core 3.0 release, Entity Framework Core 3.0 and Entity Framework 6.3 are now generally available on nuget.org.

New updates in EF Core 3.0 include:
- A newly architected LINQ provider that translates more query patterns into SQL, generating efficient queries in more cases and preventing inefficient queries from going undetected.
- Cosmos DB support, to help developers familiar with the EF programming model easily target Azure Cosmos DB as an application database.

EF 6.3 brings the following improvements:
- With support for .NET Core 3.0, the EF 6.3 runtime package now targets .NET Standard 2.1 in addition to .NET Framework 4.0 and 4.5.
- Support for SQL Server hierarchyid.
- Improved compatibility with Roslyn and NuGet PackageReference.
- The new ef6.exe utility for enabling, adding, scripting, and applying migrations from assemblies; this replaces migrate.exe.

.NET Core 3.0 is a major new release of .NET Core, and developers have widely appreciated the announcement.

https://twitter.com/dotMorten/status/1176172319598759938
https://twitter.com/robertmclaws/status/1176206536546357248
https://twitter.com/JaypalPachore/status/1176200191021473792

Interested developers can start updating their existing projects to target .NET Core 3.0. The release is compatible with earlier .NET Core versions, which makes updating easier.

Other interesting news in Tech
Introducing Weld, a runtime written in Rust and LLVM for cross-library optimizations.
Chrome 78 beta brings the CSS Properties and Values API, the native File Systems API and more.
LLVM 9 releases with official RISC-V target support, asm goto, Clang 9 and more.


IBM open-sources Power ISA and other chips; brings OpenPOWER Foundation under the Linux Foundation

Vincy Davis
22 Aug 2019
3 min read
Yesterday, IBM made a huge announcement underscoring its commitment to the open hardware movement. At the ongoing Linux Foundation Open Source Summit 2019, Ken King, the general manager for OpenPOWER at IBM, disclosed that the Power Series chipmaker is open-sourcing its Power Instruction Set Architecture (ISA) and other chip designs for developers to build new hardware. IBM wants open community members to take advantage of "POWER's enterprise-leading capabilities to process data-intensive workloads and create new software applications for AI and hybrid cloud built to take advantage of the hardware's unique capabilities," says IBM.

At the Summit, King also announced that the OpenPOWER Foundation will be integrated with the Linux Foundation. Launched in 2013, IBM's OpenPOWER Foundation is a collaboration around Power ISA-based products and has the support of 350 members, including IBM, Google, Hitachi, and Red Hat.

By moving the OpenPOWER Foundation under the Linux Foundation, IBM wants the developer community to be able to try Power-based systems without paying any fee. This should motivate developers to customize their OpenPOWER chips for applications like AI and hybrid cloud by taking advantage of POWER's rich feature set. "With our recent Red Hat acquisition and today's news, POWER is now the only architecture—and IBM the only processor vendor—that can boast of a completely open systems stack, from the foundation of the processor instruction set and firmware all the way through the software," King adds.

Read More: Red Hat joins the RISC-V foundation as a Silver level member

The Linux Foundation supports open source projects by providing financial and intellectual resources, infrastructure, services, events, and training. Hugh Blemings, the Executive Director of the OpenPOWER Foundation, said in a blog post, "The OpenPOWER Foundation will now join projects and organizations like OpenBMC, CHIPS Alliance, OpenHPC and so many others within the Linux Foundation." He concludes, "The Linux Foundation is the premier open-source group, and we're excited to be working more closely with them."

Many developers are of the opinion that IBM open-sourcing the ISA is a decision taken too late. A user on Hacker News comments, "28 years after introduction. A bit late." Another user says, "I'm afraid they are doing it for at least 10 years too late". A third comment reads, "might be too little too late. I used to be powerpc developer myself, now nearly all the communities, the ecosystem, the core developers are gone, it's beyond repair, sigh".

Many users also think IBM's announcements are a direct challenge to the RISC-V community. A Redditor comments, "I think the most interesting thing about this is that now RISC-V has a direct competitor, and I wonder how they'll react to IBM's change." Another user says, "Symbolic. Risc-V, is more open, and has a lot of implementations already, many of them open. Sure, power is more about high performance computing, but it doesn't change that much. Still, nice addition. It doesn't really change substantially anything about Power or it's future adoption"

You can visit the IBM newsroom for more information on the announcements.

Black Hat USA 2019 conference Highlights: IBM's 'warshipping', OS threat intelligence bots, Apple's $1M bug bounty programs and much more!
IBM continues to layoff older employees solely to attract Millennials to be at par with Amazon and Google
IBM halt sales of Watson AI tool for drug discovery amid tepid growth: STAT report

Data Governance in Operations Needed to Ensure Clean Data for AI Projects from AI Trends

Matthew Emerick
15 Oct 2020
5 min read
By AI Trends Staff

Data governance in data-driven organizations is a set of practices and guidelines that define where responsibility for data quality lives. The guidelines support the operation's business model, especially if AI and machine learning applications are at work.

Data governance is an operations issue, existing between strategy and the daily management of operations, suggests a recent account in the MIT Sloan Management Review. "Data governance should be a bridge that translates a strategic vision acknowledging the importance of data for the organization and codifying it into practices and guidelines that support operations, ensuring that products and services are delivered to customers," stated author Gregory Vial, an assistant professor of IT at HEC Montréal.

To prevent data governance from being limited to a plan that nobody reads, "governing" data needs to be a verb and not a noun phrase as in "data governance." Vial writes, "The difference is subtle but ties back to placing governance between strategy and operations — because these activities bridge and evolve in step with both."

An overall framework for data governance was proposed by Vijay Khatri and Carol V. Brown in a piece in Communications of the ACM published in 2010. The two suggested the strategy is based on five dimensions that represent a combination of structural, operational, and relational mechanisms:

- Principles at the foundation of the framework that relate to the role of data as an asset for the organization;
- Quality, to define the requirements for data to be usable and the mechanisms in place to assess that those requirements are met;
- Metadata, to define the semantics crucial for interpreting and using data — for example, those found in a data catalog that data scientists use to work with large data sets hosted on a data lake;
- Accessibility, to establish the requirements related to gaining access to data, including security requirements and risk mitigation procedures;
- Life cycle, to support the production, retention, and disposal of data on the basis of organizational and/or legal requirements.

"Governing data is not easy, but it is well worth the effort," stated Vial. "Not only does it help an organization keep up with the changing legal and ethical landscape of data production and use; it also helps safeguard a precious strategic asset while supporting digital innovation."

Master Data Management Seen as a Path to Clean Data Governance

Once the organization commits to data quality, what's the best way to get there? Naturally, entrepreneurs are in a position to step forward with suggestions. Some of them are around master data management (MDM), a discipline where business and IT work together to ensure the accuracy and consistency of the enterprise's master data assets.

Organizations starting down the path with AI and machine learning may be tempted to clean the data that feeds a specific application project, a costly approach in the long run, suggests one expert. "A better, more sustainable way is to continuously cure the data quality issues by using a capable data management technology. This will result in your training data sets becoming rationalized production data with the same master data foundation," suggests Bill O'Kane, author of a recent account from tdwi.org on master data management. Formerly an analyst with Gartner, O'Kane is now the VP and MDM strategist at Profisee, a firm offering an MDM solution.

If the data feeding into the AI system is not unique, accurate, consistent, and timely, the models will not produce reliable results and are likely to lead to unwanted business outcomes. These could include different decisions being made on two customer records thought to represent different people but which in fact describe the same person, or recommending a product to a customer that was previously returned or generated a complaint.

Perceptilabs Tries to Get in the Head of the Machine Learning Scientist

Getting inside the head of a machine learning scientist might be helpful in understanding how a highly trained expert builds and trains complex mathematical models. "This is a complex time-consuming process, involving thousands of lines of code," writes Martin Isaksson, co-founder and CEO of Perceptilabs, in a recent account in VentureBeat. Perceptilabs offers a product to help automate the building of machine learning models, what it calls a "GUI for TensorFlow."

"As AI and ML took hold and the experience levels of AI practitioners diversified, efforts to democratize ML materialized into a rich set of open source frameworks like TensorFlow and datasets. Advanced knowledge is still required for many of these offerings, and experts are still relied upon to code end-to-end ML solutions," Isaksson wrote.

AutoML tools have emerged to help adjust parameters and train machine learning models so that they are deployable. Perceptilabs is adding a visual modeler to the mix. The company designed its tool as a visual API on top of TensorFlow, which it acknowledges as the most popular ML framework. The approach gives developers access to the low-level TensorFlow API and the ability to pull in other Python modules. It also gives users transparency into how the model is architected and a view into how it performs.

Read the source articles in the MIT Sloan Management Review, Communications of the ACM, tdwi.org and VentureBeat.


DARPA on the hunt to catch deepfakes with its AI forensic tools underway

Natasha Mathur
08 Aug 2018
5 min read
The U.S. Defense Advanced Research Projects Agency (DARPA) has come out with AI-based forensic tools to catch deepfakes, first reported by MIT Technology Review yesterday. According to MIT Technology Review, more tools are currently under development to expose fake images and revenge porn videos on the web. DARPA's deepfake mission project was announced earlier this year.

(Image: Alec Baldwin on Saturday Night Live, face-swapped with Donald Trump)

As mentioned in the MediFor blog post, "While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns". This is one of the major reasons why DARPA forensics experts are keen on finding methods to detect deepfake videos and images.

How did deepfakes originate?

Back in December 2017, a Reddit user named "DeepFakes" posted extremely real-looking explicit videos of celebrities. He used deep learning techniques to insert celebrities' faces into adult movies. Using deep learning, one can combine and superimpose existing images and videos onto original images or videos to create realistic-seeming fake videos.

As per the MIT Technology Review, "Video forgeries are done using a machine-learning technique -- generative modeling -- lets a computer learn from real data before producing fake examples that are statistically similar". Video tampering is done using two neural networks -- generative adversarial networks -- which work in conjunction "to produce ever more convincing fakes".

Why are deepfakes toxic?

An app named FakeApp, released earlier this year, made creating deepfakes quite easy. FakeApp uses neural networking tools developed by Google's AI division, training itself to perform image-recognition tasks using trial and error. Ever since its release, the app has been downloaded more than 120,000 times, and there are tutorials online on how to create deepfakes. Apart from this, there are regular requests on deepfake forums asking users for help in creating face-swap porn videos of ex-girlfriends, classmates, politicians, celebrities, and teachers. Deepfakes can even be used to create fake news, such as world leaders declaring war on a country. The toxic potential of this technology has led to growing concern, as deepfakes have become a powerful tool for harassing people. Once deepfakes found their way onto the world wide web, many websites such as Twitter and PornHub banned them from being posted on their platforms. Reddit also announced a ban on deepfakes earlier this year, entirely killing the "deepfakes" subreddit, which had more than 90,000 subscribers.

MediFor: DARPA's AI weapon to counter deepfakes

DARPA's Media Forensics group, also known as MediFor, is working along with other researchers to develop AI tools for catching deepfakes. It is currently focusing on four techniques to catch the audiovisual discrepancies present in a forged video: analyzing lip sync, detecting speaker inconsistency, scene inconsistency, and content insertions.

One technique comes from a team led by Professor Siwei Lyu of SUNY Albany. Lyu mentioned that they "generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well". Because deepfakes are created from static images, Lyu noticed that the faces in deepfake videos rarely blink, and that eye movement, if present, is quite unnatural.

An academic paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking," by Yuezun Li, Ming-Ching Chang, and Siwei Lyu explains a method to detect forged videos using Long-term Recurrent Convolutional Networks (LRCN). According to the research paper, people blink on average about 17 times a minute, or 0.283 times per second; this rate increases with conversation and decreases while reading. There are several other techniques used for eye-blink detection, such as detecting the eye state by computing the vertical distance between eyelids, measuring the eye aspect ratio (EAR), and using a convolutional neural network (CNN) to detect open and closed eye states. But Li, Chang, and Lyu take a different approach, relying on an LRCN model. They first perform pre-processing to identify facial features and normalize the video frame orientation; then they pass cropped eye images into the LRCN for evaluation. This technique is quite effective and compares favorably with the other approaches, with a reported accuracy of 0.99 (LRCN) versus 0.98 (CNN) and 0.79 (EAR).

However, Lyu says that a skilled video editor can fix the non-blinking deepfakes by using images that show blinking eyes. Lyu's team has an effective technique in the works to counter even that, though he hasn't divulged any details. Others at DARPA are on the lookout for similar cues, such as strange head movements and odd eye color, as these little details are bringing the team ever closer to reliable detection of deepfakes.

As mentioned in the MIT Technology Review post, "the arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths". Also, MediFor states, "If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video".

Deepfakes need to stop, and the U.S. Defense Advanced Research Projects Agency (DARPA) seems all set to fight against them.

Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
YouTube has a $25 million plan to counter fake news and misinformation
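To make the eye-aspect-ratio (EAR) cue above concrete, here is a minimal sketch (not code from the paper; the six-point landmark ordering is the common p1..p6 convention, and the 0.2 blink threshold and 20-second gap are assumptions for illustration):

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of eye landmarks in the common p1..p6 ordering."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance p2-p6
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance p3-p5
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance p1-p4
    return (v1 + v2) / (2.0 * h)           # drops sharply when eye closes

def looks_suspicious(ears, threshold=0.2, fps=30.0, max_gap_s=20.0):
    """Hypothetical heuristic: flag a clip with an implausibly long
    stretch of frames in which the eyes never close."""
    frames_since_blink = 0
    for ear in ears:
        frames_since_blink = 0 if ear < threshold else frames_since_blink + 1
        if frames_since_blink > max_gap_s * fps:
            return True
    return False
```

The LRCN approach in the paper replaces the fixed threshold with a recurrent network that looks at sequences of cropped eye images, which is why it handles noisy per-frame estimates better than raw EAR.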


Deepmind’s AlphaFold is successful in predicting the 3D structure of a protein making major inroads for AI use in healthcare

Sugandha Lahoti
04 Dec 2018
3 min read
Google's DeepMind is turning its attention to using AI for science and healthcare. Last month, Google made major inroads into healthcare tech by absorbing DeepMind Health, and in August its AI was successful in spotting over 50 sight-threatening eye diseases. Now it has solved another tough science problem. At an international conference in Cancun on Sunday, DeepMind's latest AI system, AlphaFold, won the Critical Assessment of Structure Prediction (CASP) competition. CASP is held every two years, inviting participants to submit models that predict the 3D structure of a protein from its amino acid sequence.

The ability to predict a protein's shape is useful to scientists because it is fundamental to understanding the protein's role within the body. It is also used for diagnosing and treating diseases such as Alzheimer's, Parkinson's, Huntington's, and cystic fibrosis. AlphaFold's SUMZ score was 127.9 (the previous winner's SUMZ score was 80.46), achieving what CASP called "unprecedented progress in the ability of computational methods to predict protein structure." The second-placed team, named Zhang, scored 107.6.

How DeepMind's AlphaFold works

AlphaFold's team trained a neural network to predict a separate distribution of distances between every pair of residues in a protein. These probabilities were then combined into a score that estimates how accurate a proposed protein structure is. They also trained a separate neural network that uses all distances in aggregate to estimate how close the proposed structure is to the right answer. The scoring functions were used to search the protein landscape for structures that matched the predictions.

The team used two distinct methods to construct predictions of full protein structures. The first method repeatedly replaced pieces of a protein structure with new protein fragments; a generative neural network was trained to invent new fragments that improve the score of the proposed structure. The second method optimized the scores through gradient descent, building highly accurate structures. This technique was applied to entire protein chains rather than to pieces that must be folded separately before being assembled, reducing the complexity of the prediction process.

DeepMind founder and CEO Demis Hassabis celebrated the victory in a tweet.
https://twitter.com/demishassabis/status/1069411081603481600
Google CEO Sundar Pichai was also excited about this development and how AI can be used for scientific discovery.
https://twitter.com/sundarpichai/status/1069450462284267520

NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale.
Google makes major inroads into healthcare tech by absorbing DeepMind Health
A new episodic memory-based curiosity model to solve procrastination in RL agents by Google Brain, DeepMind and ETH Zurich
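As a loose illustration of the second method, here is a toy sketch (vastly simplified and not DeepMind's code; the protein length, the random stand-in for the network's predicted distances, and the optimizer settings are all invented for the example) of refining 3D coordinates by gradient descent against a predicted inter-residue distance map:

```python
import torch

# Toy sketch: adjust 3-D residue coordinates so their pairwise distances
# match a predicted distance map, mirroring the gradient-descent method
# described above. The "prediction" here is random, purely for shape.
n = 64                                    # hypothetical protein length
pred = torch.rand(n, n) * 10              # stand-in for the network's output
pred = (pred + pred.T) / 2                # a distance map is symmetric
mask = ~torch.eye(n, dtype=torch.bool)    # ignore self-distances

coords = torch.randn(n, 3, requires_grad=True)   # one 3-D point per residue
opt = torch.optim.Adam([coords], lr=0.05)

for step in range(500):
    diff = coords.unsqueeze(0) - coords.unsqueeze(1)
    d = (diff.pow(2).sum(-1) + 1e-8).sqrt()      # pairwise distances
    loss = ((d - pred)[mask] ** 2).mean()        # mismatch with prediction
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final mismatch: {loss.item():.4f}")
```

In the real system the targets are full probability distributions over distances rather than point estimates, and the score also folds in torsion-angle predictions, but the shape of the optimization is the same.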


French data regulator, CNIL imposes a fine of 50M euros against Google for failing to comply with GDPR

Sugandha Lahoti
22 Jan 2019
3 min read
The French data regulator, the National Data Protection Commission (CNIL), has imposed a financial penalty of 50 million euros on Google for failing to comply with GDPR. After a thorough analysis, CNIL observed that the information provided by Google is not easily accessible for users, nor is it always clear or comprehensive.

CNIL started this investigation after receiving complaints from None Of Your Business and La Quadrature du Net, which complained about Google "not having a valid legal basis to process the personal data of the users of its services, particularly for ads personalization purposes."

https://twitter.com/laquadrature/status/1087406112582914050
https://twitter.com/NOYBeu/status/1087458762359824385

Following its own investigation of the complaints, CNIL also found Google guilty of not validly obtaining user consent for ad personalization purposes. Per the committee, Google makes it hard for people to understand how their data is being used by using broad and obscure wording. For example, CNIL says, "in the section 'Ads Personalization', it is not possible to be aware of the plurality of services, websites and applications involved in these processing operations (Google search, Youtube, Google home, Google maps, Play store, Google pictures…) and therefore of the amount of data processed and combined."

Google also violates GDPR rules when new Android users set up a phone and follow Android's onboarding process. The committee found that when an account is created, the user can modify some options associated with the account by clicking on 'More options'; however, the ads personalization option is pre-ticked, which violates GDPR's requirement that consent be unambiguous.

Furthermore, GDPR states that consent is "specific" only if it is given distinctly for each purpose. Google violates this as well: before creating an account, the user is asked to tick the boxes «I agree to Google's Terms of Service» and «I agree to the processing of my information as described above and further explained in the Privacy Policy» in order to create the account. The user therefore gives his or her consent in full, for all the processing purposes carried out by Google.

Netizens feel that 50 million euros is far too little for a massive organization like Google to pay as a fine. However, a Hacker News user counter-argued: "Google or any other company does not get to just continue their practices, as usual, the fine is pure 'punishment' for the bad behavior in the past. Google would gladly pay them if it meant they could continue their anti-competitive practices, it would just be a cost of doing business. But that's not the point of them. The real teeth are in the changes they will be forced to make."

Twitterati were also in support of CNIL.

https://twitter.com/AlexT_KN/status/1087466073161641984
https://twitter.com/mcfslaw/status/1087552151377797120
https://twitter.com/chesterj1/status/1087387249178750983
https://twitter.com/carlboutet/status/1087471877143085056

A Google spokesperson gave Techcrunch the following statement: "People expect high standards of transparency and control from us. We're deeply committed to meeting those expectations and the consent requirements of the GDPR. We're studying the decision to determine our next steps."

Googlers launch industry-wide awareness campaign to fight against forced arbitration
EU slaps Google with $5 billion fine for the Android antitrust case
Google+ affected by another bug, 52M users compromised, shut down within 90 days

Homebrew 2.2 releases with support for macOS Catalina

Vincy Davis
28 Nov 2019
3 min read
Yesterday, Homebrew's project leader, Mike McQuaid, announced the release of Homebrew 2.2, the third release of Homebrew this year. Major highlights of the new version include support for macOS Catalina, faster implementations of HOMEBREW_AUTO_UPDATE_SECS and brew upgrade's post-install dependent checking, and more.

Read More: After Red Hat, Homebrew removes MongoDB from core formulas due to its Server Side Public License adoption

New key features in Homebrew 2.2

- Homebrew now supports macOS Catalina (10.15); macOS Sierra (10.12) and older are unsupported.
- The no-op case for HOMEBREW_AUTO_UPDATE_SECS is now extremely fast, and the auto-update interval defaults to 5 minutes instead of 1.
- brew upgrade no longer returns an unsuccessful error code if the formula is already up-to-date.
- brew upgrade's post-install dependent checking is now much faster and more reliable.
- Homebrew on Linux has been updated and has raised its minimum requirements.
- Starting from Homebrew 2.2, the software package management system uses OpenSSL 1.1.
- The Homebrew team has disabled brew tap-pin, since it was buggy and little used by Homebrew maintainers.
- Homebrew will stop supporting Python 2.7 by the end of 2019, as it reaches EOL then.

Read More: Apple's macOS Catalina in major turmoil as it kills iTunes and drops support for 32 bit applications

Many users are excited about this release and have appreciated the efforts of Homebrew's maintainers.

https://twitter.com/DVJones89/status/1199710865160843265
https://twitter.com/dirksteins/status/1199944492868161538

A user on Hacker News comments, "While Homebrew is perhaps technically crude and somewhat inflexible compared to other and older package managers, I think it deserves real credit for being so easy to add packages to. I contributed Homebrew packages after a few weeks of using macOS, while I didn't contribute a single package in the ten years I ran Debian. I'm also impressed by the focus of the maintainers and their willingness for saying no and cutting features. We need more of that in the programming field. Homebrew is unashamedly solely for running the standard configuration of the newest version of well-behaved programs, which covers at least 90% of my use cases. I use Nix when I want something complicated or nonstandard."

To know about the features in detail, head over to Homebrew's official page.

Announcing Homebrew 2.0.0!
Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and much more!
Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
ActiveState adds thousands of curated Python packages to its platform
Firefox Preview 3.0 released with Enhanced Tracking Protection, Open links in Private tab by default and more


Google is looking to acquire Looker, a data analytics startup for $2.6 billion even as antitrust concerns arise in Washington

Sugandha Lahoti
07 Jun 2019
5 min read
Google has entered into an agreement with the data analytics startup Looker and is planning to add it to its Google Cloud division. The acquisition will cost Google $2.6 billion in an all-cash transaction. After the acquisition, the Looker organization will report to Frank Bien, who will report to Thomas Kurian, CEO of Google Cloud. Looker is Google's biggest acquisition since it bought smart home company Nest for $3.2 billion in 2014.

Looker's analytics platform provides business intelligence and data visualization tools. Founded in 2011, Looker has grown rapidly and now helps more than 1,700 companies understand and analyze their data. The company had raised more than $280 million in funding, according to Crunchbase.

Looker spans the gap between the two areas of data warehousing and business intelligence. Its platform includes a modeling layer where the user codifies the view of the data using a SQL-like proprietary modeling language (LookML), complemented by an end-user visualization tool providing the self-service analytics portion.

Primarily, Looker will help Google Cloud offer a complete analytics solution, taking customers from ingesting data to visualizing results and integrating data and insights into their daily workflows. Looker + Google Cloud will be used for:

- Connecting, analyzing, and visualizing data across Google Cloud, Azure, AWS, on-premise databases, or ISV SaaS applications
- Operationalizing BI for everyone with powerful data modeling
- Augmenting business intelligence from Looker with artificial intelligence from Google Cloud
- Creating collaborative, data-driven applications for industries with interactive data visualization and machine learning

Implications of Google + Looker

Google and Looker already have a strong existing partnership and 350 common customers (such as Buzzfeed, Hearst, King, Sunrun, WPP Essence, and Yahoo!), and this acquisition will only strengthen it. "We have many common customers we've worked with. One of the great things about this acquisition is that the two companies have known each other for a long time, we share very common culture," Kurian said in a blog.

This is also a significant move by Google to gain market share from Amazon Web Services, which reported $7.7 billion in revenue for the last quarter. Google Cloud has been trailing behind Amazon and Microsoft in the cloud-computing market, and Looker's acquisition will hopefully make its service more attractive to corporations. Looker CEO Frank Bien described the partnership as a chance to gain the scale of the Google Cloud platform: "What we're really leveraging here, and I think the synergy with Google Cloud, is that this data infrastructure revolution and what really emerged out of the Big Data trend was very fast, scalable — and now in the cloud — easy to deploy data infrastructure," he said.

What is intriguing is the timing and the all-cash payment of this buyout. The FTC, DOJ, and Congress are currently looking at bringing potential antitrust action against Google and other big tech companies. According to widespread media reports, the US Department of Justice is readying an investigation into Google, examining whether the tech giant broke antitrust law in the operation of its online and advertising businesses.

According to Paul Gallant, a tech analyst with Cowen who focuses on regulatory issues, "A few years ago, this deal would have been waved through without much scrutiny. We're in a different world today, and there might well be some buyer's remorse from regulators on prior tech deals like this."

Public reaction to the acquisition has been mixed. While some are happy:

https://twitter.com/robgo/status/1136628768968192001
https://twitter.com/holgermu/status/1136639110892810241

Others remain dubious: "With Looker out of the way, the question turns to 'What else is on Google's cloud shopping list?," said Aaron Kessler, a Raymond James analyst, in a report. "While the breadth of public cloud makes it hard to list specific targets, vertical specific solutions appear to be a strategic priority for Mr. Kurian."

There are also questions about whether Google will limit Looker to BigQuery, or at least give it the newest features first.

https://twitter.com/DanVesset/status/1136672725060243457

Then there is the issue of whether Google will limit which clouds Looker can run on, although the company said it will continue to support Looker's multi-cloud strategy and will expand support for multiple analytics tools and data sources to provide customers choice. Google Cloud will also continue to expand Looker's investments in product development, go-to-market, and customer success capabilities.

Google is also known for killing off its own products and undermining some of its acquisitions. With Nest, for example, Google said it would be integrated with Google Assistant, and the decision was reversed only after a massive public backlash. Looker could be one such acquisition, eventually merging with Google Analytics, Google's proprietary web analytics service.

The deal is expected to close later this year, subject to regulatory approval.

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google and Binomial come together to open-source Basis Universal Texture Format
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language


Docker announces Docker Desktop Enterprise

Savia Lobo
05 Dec 2018
3 min read
Yesterday, at DockerCon Europe 2018, Docker announced Docker Desktop Enterprise, an easy, fast, and secure way to build production-ready containerized applications.

Docker Desktop Enterprise

Docker Desktop Enterprise is a new addition to Docker's desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. The Enterprise version enables developers to work with the frameworks and languages they are comfortable with, and assists IT teams in safely configuring, deploying, and managing development environments while adhering to corporate standards. The Enterprise version thus enables organizations to quickly move containerized applications from development to production and reduce their time to market.

Features of Docker Desktop Enterprise

Enterprise manageability: With Docker Desktop Enterprise, IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production. For the IT team, Docker Desktop Enterprise is packaged as standard MSI (Windows) and PKG (Mac) distribution files, which work with existing endpoint management tools with lockable settings via policy files. This edition also provides developers with ready-to-code, customized, and approved application templates.

Enterprise deployment and configuration packaging: IT desktop admins can deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools, using standard MSI and PKG files. Desktop administrators can also enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience. Application architects provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs.

Increased developer productivity for shipping production-ready containerized applications: Developers can quickly use company-provided application templates that instantly replicate production-approved application configurations on the local desktop, via configurable version packs. With version packs, developers can synchronize their desktop development environment with the same Docker API and Kubernetes versions used in production with Docker Enterprise. No Docker CLI commands are required to get started with configurable version packs. Developers can also use the Application Designer interface's template-based workflows for creating containerized applications; for those who have never launched a container before, the Application Designer interface provides the foundational container artifacts and the organization's skeleton code to help them get started with containers in minutes.

Read more about Docker Desktop Enterprise here.

Gremlin makes chaos engineering with Docker easier with new container discovery feature
Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login
Zeit releases Serverless Docker in beta

Redox OS will soon permanently run rustc, the compiler for the Rust programming language, says Redox creator Jeremy Soller

Vincy Davis
29 Nov 2019
4 min read
Two days ago, Jeremy Soller, the Redox OS BDFL (Benevolent Dictator For Life), shared recent developments in Redox, a Unix-like operating system written in Rust. The Redox OS team is now very close to running rustc, the compiler for the Rust programming language, on Redox; dynamic libraries are the main remaining area that needs improvement.

https://twitter.com/redox_os/status/1199883423797481473

Redox is a Unix-like operating system written in Rust, aiming to bring the innovations of Rust to a modern microkernel and a full set of applications. In March this year, Redox OS 0.5.0 was released with support for Cairo, Pixman, and other libraries and packages.

Ongoing developments in Redox OS

Soller says that he has been running Redox OS on a System76 Galago Pro (galp3-c) with System76 Open Firmware and has found the work satisfactory so far. “My work on real hardware has improved drivers and services, added HiDPI support to a number of applications, and spawned the creation of new projects such as pkgar to make it easier to install Redox from a live disk,” says Soller on the official Redox OS news page.

Furthermore, he notified users that Redox has also become easier to cross-compile, since the redoxer tool can now build, run, and test Redox code. It can also automatically manage a Redox toolchain and run executables for Redox inside a container on demand.

However, compiling Rust binaries on Redox OS itself is one of the project’s long-standing issues, and it has garnered much attention over time. According to Soller, through the excellent work done by ids1024, a GSoC project member, Redox OS had almost achieved self-hosting. Later, the creation of relibc (a C library written in Rust) and the subsequent work done by its contributors led to a POSIX C compatibility library, which gave rise to a significant increase in the number of available packages. With a large number of Rust crates suddenly gaining Redox OS support, it seemed as though the dream of self-hosting would soon be reality. However, after finding some errors in relibc, Soller realized that “rustc is no longer capable of running statically linked!”

Read More: Rust 1.39 releases with stable version of async-await syntax, better ergonomics for match guards, attributes on function parameters, and more

Finally, the team shifted its focus to relibc’s ld_so, which provides dynamic linking support for executables. This, however, has caused a temporary halt to porting rustc to Redox OS.

Building Redox OS on Redox OS is one of the project’s highest priorities. Soller has assured users that rustc is only a few months away from running permanently. He also adds that, with Redox OS being a microkernel, it is possible that even the driver level could be recompiled and respawned without downtime, which would make the operating system exceedingly fast to develop. In the coming months, he will be working on making it more efficient to port software and on tackling more hardware support issues. Eventually, Soller hopes to develop Redox OS into a fully self-hosted microkernel operating system written in Rust.

Users are excited about the new developments in Redox OS and have thanked Soller for them.
One Redditor commented, “I cannot tell you how excited I am to see the development of an operating system with greater safety guarantees and how much I wish to dual boot with it when it is stable enough to use daily.” Another Redditor said, “This is great! Love seeing updates to this project 👍”

https://twitter.com/flukejones/status/1200225781760196609

Head over to the official Redox OS news page for more details.

AWS will be sponsoring the Rust Project

A Cargo vulnerability in Rust 1.25 and prior makes it ignore the package key and download a wrong dependency

Rust 1.38 releases with pipelined compilation for better parallelism while building a multi-crate project

Homebrew 2.2 releases with support for macOS Catalina

ActiveState adds thousands of curated Python packages to its platform


Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations

Natasha Mathur
29 Nov 2018
4 min read
The Google Chrome team finally announced the release date for its autoplay policy earlier this week. The policy had been delayed after it first shipped with the Chrome 66 stable release back in May this year. The latest policy change is scheduled to arrive with Chrome 71 in the coming month.

The autoplay policy imposes restrictions that prevent videos and audio from autoplaying in the web browser. For websites that want to autoplay their content, the new policy change will prevent playback by default. For most sites, playback will resume, but in other cases a small code adjustment will be required to resume the audio.

Additionally, Google has added a new approach to the policy that tracks users’ past behavior with sites that have autoplay enabled. So if a user regularly lets audio play for more than 7 seconds on a website, autoplay gets enabled for that website. This is done with the help of a Media Engagement Index (MEI), an index stored locally per Chrome profile on a device. The MEI tracks the number of visits to a site that include audio playback of more than 7 seconds. Each website gets a score between zero and one in the MEI, where a higher score indicates that the user doesn’t mind audio playing on that website. For new user profiles, or if a user clears their browsing data, a pre-seeded list based on anonymized, aggregated MEI scores is used to decide which websites can autoplay. The pre-seeded site list is algorithmically generated, and only sites where enough users permit autoplay are added to the list.

“We believe by learning from the user – and anticipating their intention on a per website basis – we can create the best user experience. If users tend to let content play from a website, we will autoplay content from that site in the future. Conversely, if users tend to stop autoplay content from a given website, we will prevent autoplay for that content by default,” mentions the Google team.

The reason behind the delay

The autoplay policy had been delayed by Google after receiving feedback from the Web Audio developer community, especially web game developers and WebRTC developers. As per the feedback, the autoplay change was affecting many web games and audio experiences, especially on sites that had not been updated for the change. Delaying the policy rollout gave web game developers enough time to update their websites. Moreover, Google also explored ways to reduce the negative impact of the autoplay policy on websites with audio. Following this, Google made an adjustment to its implementation of Web Audio to reduce the number of websites originally impacted.

New adjustments made for developers

As per Google’s new adjustments to the autoplay policy, audio will resume automatically when the user has interacted with the page and the start() method of a source node is called. A source node represents an individual audio snippet that most games play, for example a sound that plays when a player collects a coin, or the background music for a particular stage within a game. Game developers call the start() function on source nodes whenever any of these sounds are needed by the game. These changes will enable autoplay in most web games once the user starts playing the game.
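To make the Media Engagement Index described above more concrete, here is a toy TypeScript sketch of the idea: a per-site score between zero and one that gates autoplay once it crosses a threshold. This is purely illustrative and not Chrome’s actual implementation; all names and the threshold value are hypothetical.

```typescript
// Toy model of a per-site media engagement score -- illustrative only,
// not Chrome's actual MEI implementation.

interface SiteEngagement {
  visits: number;          // total visits to the site
  mediaPlaybacks: number;  // visits with more than 7s of audible playback
}

const engagement = new Map<string, SiteEngagement>();

// Record one visit; `playedLongAudio` marks >7 seconds of playback.
function recordVisit(origin: string, playedLongAudio: boolean): void {
  const entry = engagement.get(origin) ?? { visits: 0, mediaPlaybacks: 0 };
  entry.visits += 1;
  if (playedLongAudio) entry.mediaPlaybacks += 1;
  engagement.set(origin, entry);
}

// Score between 0 and 1: the fraction of visits with long audio playback.
function mediaEngagementScore(origin: string): number {
  const entry = engagement.get(origin);
  if (!entry || entry.visits === 0) return 0;
  return entry.mediaPlaybacks / entry.visits;
}

// Hypothetical threshold: allow autoplay only on high-engagement sites.
function allowAutoplay(origin: string, threshold = 0.3): boolean {
  return mediaEngagementScore(origin) >= threshold;
}
```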
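The start() adjustment also suggests a simple pattern for game developers: create the AudioContext up front, resume it on the first user gesture, and play one-shot sounds through fresh source nodes. The sketch below shows this pattern using standard Web Audio APIs; the asset URL and sound names are hypothetical.

```typescript
// Autoplay-policy-friendly Web Audio usage (browser environment assumed).

const ctx = new AudioContext(); // may begin in the "suspended" state

let coinBuffer: AudioBuffer | null = null; // hypothetical sound effect

// Load and decode the sound during game setup.
async function loadCoinSound(): Promise<void> {
  const response = await fetch("/sounds/coin.wav"); // hypothetical URL
  const data = await response.arrayBuffer();
  coinBuffer = await ctx.decodeAudioData(data);
}

// Play the sound through a fresh, one-shot source node.
function playCoinSound(): void {
  if (!coinBuffer) return;
  const source = ctx.createBufferSource();
  source.buffer = coinBuffer;
  source.connect(ctx.destination);
  source.start(); // audible once the user has interacted with the page
}

// Resume the context on the first user gesture so that subsequent
// start() calls on source nodes actually produce sound.
document.addEventListener(
  "click",
  () => {
    if (ctx.state === "suspended") {
      void ctx.resume();
    }
  },
  { once: true }
);
```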
The Google team has also introduced a mechanism that allows users to disable the autoplay policy for cases where the automatic learning doesn’t work as expected.

Along with the new autoplay policy update, Google will also stop showing existing annotations on YouTube videos to viewers starting from January 15, 2019, after which all existing annotations will be removed.

“We always put our users first but we also don’t want to let down the web development community. We believe that with our adjustments to the implementation of the policy, and the additional time we provided for web audio developers to update their code, that we will achieve this balance with Chrome 71,” says the Google team.

For more information, check out Google’s official blog post.

“ChromeOS is ready for web development” – A talk by Dan Dascalescu at the Chrome Web Summit 2018

Day 1 of Chrome Dev Summit 2018: new announcements and Google’s initiative to close the gap between web and native

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team