
Tech News
Quora Hacked: Almost 100 million users’ data compromised!
Melisha Dsouza | 04 Dec 2018 | 2 min read

Yesterday, Quora announced that one of its systems was hacked and that approximately 100 million users’ data may have been exposed to an unauthorized third party. The breach was discovered on November 30, after which the team immediately notified law enforcement and hired a digital forensics and security consulting company to uncover details of the attack. Quora is a closely knit community of experts and intellectuals, estimated to receive almost 700 million visits per month, making it the 95th-largest site in the world.

Adam D’Angelo, CEO of Quora, states that for approximately 100 million Quora users, the following information may have been compromised:

- Account information, such as name, email address, encrypted (hashed) password, and data imported from linked networks when authorized by users
- Public content and actions, including questions, answers, comments, and upvotes
- Non-public content and actions, such as answer requests, downvotes, and direct messages

Quora says that users who post questions and answers anonymously are safe, as the site does not store the identities of people who post anonymous content.

Quora has started notifying affected users via email and is logging out all Quora users who may have been affected. For users who authenticate with a password, Quora is invalidating those passwords. Quora has also advised users to head over to its help center for answers to more specific questions about the breach.

The breach comes right after the Marriott International hotel group breach that impacted half a billion users. Quora concludes: “The investigation is still ongoing, we have already taken steps to contain the incident, and our efforts to protect our users and prevent this type of incident from happening in the future are our top priority as a company.”

Head over to Quora’s official site to know more about this news.
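Quora notes that the leaked passwords were encrypted (hashed). Salted, deliberately slow hashing is what makes a leaked password database hard to reverse. A minimal sketch with Python's standard library, purely for illustration (this is not Quora's actual scheme):

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); store these, never the plaintext password."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guess", salt, digest))                         # False
```

Even with hashes leaked, weak passwords can still be brute-forced, which is why Quora is invalidating affected passwords anyway.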
Related:
- A new data breach on Facebook due to malicious browser extensions put almost 81,000 users’ private data up for sale, reports BBC News
- Uber fined nearly $1.2m by the British ICO and Dutch DPA over a data breach from 2016
- Use TensorFlow and NLP to detect duplicate Quora questions [Tutorial]

NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0
Natasha Mathur | 04 Dec 2018 | 2 min read

NVIDIA unveiled PhysX SDK 4.0 yesterday and announced that it is making its popular real-time physics simulation engine, PhysX, available as open source under the BSD-3-Clause license. “We’re doing this because physics simulation — a long key to immersive games and entertainment — turns out to be more important than we ever thought. PhysX will now be the only free, open-source physics solution that takes advantage of GPU acceleration and can handle large virtual environments”, says the NVIDIA team.

NVIDIA designed PhysX specifically for hardware acceleration on powerful processors comprising hundreds of processing cores. This design offers a dramatic boost in physics processing power, which in turn takes the gaming experience to a whole new level, offering richer and more immersive physical gaming environments.

The new PhysX SDK 4.0 is a scalable, open source, multi-platform game physics solution that supports a wide range of devices, from smartphones to high-end multicore CPUs and GPUs. PhysX SDK 4.0 has been upgraded to offer industrial-grade simulation quality at game simulation performance levels. PhysX 4.0 comes with a Temporal Gauss-Seidel Solver (TGS) that adjusts constraints within games with each iteration, depending on the bodies’ relative motion. Beyond that, overall stability has been improved, and new filtering rules for kinematics and statics are now allowed.

Some of the major features of PhysX SDK 4.0 include:
- Effective memory usage management
- Support for different measurement units and scales
- Multiple broad-phase, convex-mesh, triangle-mesh, and primitive-shape collision detection algorithms

PhysX SDK 4.0 will be made available on December 20, 2018.
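A Gauss-Seidel-style solver like TGS resolves constraints iteratively, sweeping through them one at a time and immediately reusing the just-updated state. A toy 1-D Python sketch of that sweep on a chain of distance constraints (a drastic simplification for illustration, not PhysX's actual solver):

```python
def solve_chain(positions, rest, iterations=50):
    """Gauss-Seidel-style sweep: project each distance constraint in turn,
    immediately reusing the positions updated by earlier constraints."""
    p = list(positions)
    for _ in range(iterations):
        for i in range(len(p) - 1):
            error = (p[i + 1] - p[i]) - rest  # signed constraint violation
            p[i] += 0.5 * error               # split the correction between both bodies
            p[i + 1] -= 0.5 * error
    return p

p = solve_chain([0.0, 5.0, 6.0], rest=1.0)
print([round(b - a, 4) for a, b in zip(p, p[1:])])  # gaps converge to [1.0, 1.0]
```

Because each constraint sees the corrections already applied within the same sweep, the chain settles in far fewer iterations than if all corrections were applied simultaneously.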
Public reaction to the news is largely positive. PhysX was previously free for commercial use, but now that it is available as open source, people can interact deeply with the physics engine, modifying it to fit their needs at no cost.

https://twitter.com/puradawid/status/1069614540671909888
https://twitter.com/tauke/status/1069603803463184384

For more information, check out the official NVIDIA blog post.

Related:
- NVIDIA open sources its material definition language, MDL SDK
- NVIDIA unveils a new Turing architecture: “The world’s first ray tracing GPU”
- BlazingDB announces BlazingSQL, a GPU SQL engine for NVIDIA’s open source RAPIDS

Microsoft open sources SEAL (Simple Encrypted Arithmetic Library) 3.1.0, aiming to standardize homomorphic encryption
Bhagyashree R | 04 Dec 2018 | 3 min read

Yesterday, Microsoft, with the goal of standardizing homomorphic encryption, open sourced the Microsoft Simple Encrypted Arithmetic Library (Microsoft SEAL) under the MIT License. It is an easy-to-use homomorphic encryption library developed by researchers in the Cryptography Research group at Microsoft. Microsoft SEAL was first released in 2015 to provide “a well-engineered and documented homomorphic encryption library, free of external dependencies, that would be easy for both cryptography experts and novice practitioners to use.”

Industries have moved to the cloud for data storage because it is convenient, but this raises privacy concerns. To get the practical decision-making guidance that cloud and machine learning services provide, we need to share our personal information. Traditional encryption schemes do not allow running any computation on encrypted data, so we must either store our data encrypted in the cloud and download it to perform any useful operations, or hand the decryption key to service providers, which risks our privacy. Homomorphic encryption resolves this trade-off.

Homomorphic encryption is a cryptographic mechanism in which specific types of mathematical operations are carried out on the ciphertext instead of on the actual data. The mechanism produces an encrypted result that, on decryption, matches the result of the same operations performed on the plaintexts. In a nutshell, operating on the ciphertext and then decrypting yields the same output as simply operating on the initial plaintext.

Key advantages of Microsoft SEAL: it has no external dependencies, and since it is written in standard C++, it compiles easily in many different environments. At its core, it implements two encryption schemes: the Brakerski/Fan-Vercauteren (BFV) scheme and the Cheon-Kim-Kim-Song (CKKS) scheme.
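The defining property, that decrypting the result of ciphertext operations matches the result of the same operations on the plaintexts, can be seen in miniature with textbook (unpadded) RSA, which happens to be multiplicatively homomorphic. This toy uses tiny, insecure parameters purely to illustrate the property; it is not how SEAL's BFV or CKKS schemes work:

```python
# Toy multiplicative homomorphism with textbook RSA (insecure; demo only).
p, q = 61, 53
n = p * q        # modulus 3233
e = 17           # public exponent
d = 2753         # private exponent: e*d = 1 mod lcm-style phi(n)

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

m1, m2 = 7, 6
c1, c2 = enc(m1), enc(m2)

# Multiplying the ciphertexts decrypts to the product of the plaintexts.
print(dec(c1 * c2 % n), m1 * m2)  # 42 42
```

Schemes like BFV and CKKS go much further, supporting both addition and multiplication on encrypted data (CKKS on approximate real-number arithmetic), which is what makes them practical for cloud computation.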
Along with the license change, the team has also shipped a few updates in the latest release, SEAL 3.1.0, some of which are listed here:

- Support for 32-bit platforms
- Google Test framework for unit tests
- Visual Studio now uses CMake to configure SEAL on Windows
- Generating Galois keys for specific rotations is easier
- A new EncryptionParameterQualifiers flag indicates HomomorphicEncryption.org security standard compliance for parameters
- Secret key data is now cleared automatically from memory by the destructors of SecretKey, KeyGenerator, and Decryptor

To read more in detail, check out Microsoft’s official announcement.

Related:
- Microsoft becomes the world’s most valuable public company, moves ahead of Apple
- Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019
- 4 encryption options for your SQL Server

NeurIPS 2018: Deep learning experts discuss how to build adversarially robust machine learning models
Melisha Dsouza | 04 Dec 2018 | 5 min read

The NeurIPS 2018 conference, held in Montreal, Canada this week from December 2 to December 8, features a series of tutorials, releases, and announcements. The conference, previously known as NIPS, was rebranded (after much debate) because some members of the community found the acronym sexist and offensive towards women.

“Adversarial Robustness: Theory and Practice” is a tutorial presented at NeurIPS 2018 yesterday by J. Zico Kolter, a professor at Carnegie Mellon and chief scientist of the Bosch AI center, and Aleksander Madry from MIT. In the tutorial, they explored the importance of building adversarially robust machine learning models, as well as the challenges of deploying them. Adversarial robustness concerns the crafting and analysis of attacks and defense methods for machine learning models.

Aleksander opened the talk by highlighting some of the challenges faced while deploying machine learning in the real world. Even though machine learning has been a success story so far, is ML truly ready for real-world deployment? Can we truly rely on it? These questions arise because developers don’t fully understand how machine learning interacts with the other parts of a system, which opens the door to plenty of adversaries; safety remains very much an issue when deploying ML. The tutorial tackles questions related to adversarial robustness and gives plenty of examples to help developers understand the concept and deploy ML models that are more adversarially robust.

The usual measure of machine learning performance is the fraction of mistakes made during the testing phase of the algorithm. However, Aleksander explains that in reality, the distributions we apply machine learning to are NOT the ones we train it on, and these assumptions can be misleading.
The key implication is that machine learning predictions are accurate most of the time but can also turn out to be brittle. For example, the slightest noise can alter an output and cause a wrong prediction, which accounts for the brittleness of ML algorithms. Even rotation and translation can fool state-of-the-art vision models.

Brittleness and other issues in machine learning

Brittleness hampers the following domains of machine learning:

- Security: When a machine learning system has loopholes, a hacker can manipulate it, leading to a system or data breach. An example would be adding external entities to manipulate an object recognition system.
- Safety and reliability: Aleksander gives the example of Tesla’s self-driving cars, where the AI sometimes drives the car over a divider and the driver has to take over; in addition, the system does not report this as an error.
- ML alignment: Developers need to understand the “failure modes” of machine learning to understand how models work and succeed.

Adversarial issues occur in inference models, but the training phase also involves a risk called data poisoning. The goal of data poisoning is to maintain training accuracy while hampering generalization. Machine learning always needs a huge amount of data to function and train on, and to fulfill this need, systems sometimes work on data that cannot be trusted. This occurs mostly in classic machine learning scenarios and less in deep learning. In deep learning, data poisoning preserves training accuracy but hampers the classification of specific inputs; it can also plant an undetectable backdoor in the system that grants almost total control over the model.

The final issue arises in deployment. During the deployment stage, restricted access is given to a user; for example, mere access to the input-output behavior of a model can enable black-box attacks.

Aleksander’s commandments of secure/safe ML:

1. Do not train on data you do not trust.
2. Do not let anyone use the model or observe its outputs unless you completely trust them.
3. Do not fully trust the predictions of your model (because of adversarial examples).

Developers need to rethink the tools they use in machine learning to see whether they are robust enough to stress test a system. For Aleksander, we need to treat training as an optimization problem: the aim is to find parameters that minimize loss on the training sample.

Zico then builds on the principles put forward by Aleksander, showing a number of different adversarial examples in action. This includes convex relaxations, which help train and find the most optimal models for a given training set.

Takeaways from the tutorial

After understanding how to implement adversarially robust ML models, developers can now ask themselves how adversarially robust ML differs from standard ML. That said, adversarial robustness comes at a cost: optimization during training is difficult, models need to be larger, more training data might be required, and we might have to give up some standard measures of performance. However, adversarial robustness helps machine learning models become semantically meaningful.

Head over to the NeurIPS Facebook page for the entire tutorial and other sessions happening at the conference this week.

Related:
- Day 1 at the Amazon re:Invent conference – AWS RoboMaker, Fully Managed SFTP Service for Amazon S3, and much more!
- Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!
- Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
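Adversarial examples of the kind the tutorial discusses are often built with the fast gradient sign method (FGSM): perturb the input by a small amount in the direction of the sign of the loss gradient. A minimal pure-Python sketch on a made-up fixed logistic model (illustrative only, not the tutorial's code):

```python
import math

# Hypothetical fixed logistic model: p(y=1|x) = sigmoid(w.x + b).
w = [2.0, -3.0, 1.0]
b = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    """One FGSM step: move x by eps along the sign of d(loss)/dx.
    For logistic loss, d(loss)/dx = (p - y) * w."""
    p = predict(x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi) for xi, wi in zip(x, w)]

x = [1.0, 0.2, -0.5]
p_clean = predict(x)               # ~0.80: confidently predicts class 1
x_adv = fgsm(x, y=1.0, eps=0.4)
p_adv = predict(x_adv)             # ~0.27: a small perturbation flips the prediction
print(p_clean > 0.5, p_adv < 0.5)  # True True
```

Adversarial training, the min-max view Aleksander advocates, folds such worst-case perturbations into the training loss itself.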

Kubernetes 1.13 released with new features and fixes to a major security flaw
Prasad Ramesh | 04 Dec 2018 | 3 min read

A privilege escalation flaw in Kubernetes was discussed on GitHub last week. Following that, Red Hat released patches for it, and yesterday Kubernetes 1.13 was released.

The security flaw

A recent GitHub issue outlines the problem. Tracked as CVE-2018-1002105, the flaw allowed unauthorized users to craft special requests that establish a connection through the Kubernetes API server to a backend server, and then send arbitrary requests over the same connection directly to that backend. Following this, Red Hat (which IBM is in the process of acquiring) released patches for the vulnerability yesterday. All of its Kubernetes-based products are affected. The issue has now been patched, and as Red Hat classifies the impact as critical, a version upgrade is strongly recommended if you’re running an affected product. You can find more details on the Red Hat website.

Let’s now look at the new features in Kubernetes 1.13 beyond the security patch.

kubeadm is GA in Kubernetes 1.13

kubeadm is an essential tool for managing the lifecycle of a cluster, from creation through configuration to upgrade, and it is now officially GA. The tool handles bootstrapping of production clusters on current hardware and configuration of core Kubernetes components. With the GA release, advanced features are available around pluggability and configurability. kubeadm aims to be a toolbox for both admins and automated, higher-level systems.

Container Storage Interface (CSI) is also GA

The Container Storage Interface (CSI) is generally available in Kubernetes 1.13, after being introduced as alpha in Kubernetes 1.9 and beta in Kubernetes 1.10. CSI makes the Kubernetes volume layer truly extensible: it allows third-party storage providers to write plugins that interoperate with Kubernetes without having to modify the core code.

CoreDNS replaces kube-dns as the default DNS server

CoreDNS is replacing kube-dns as the default DNS server for Kubernetes.
CoreDNS is a general-purpose, authoritative DNS server that provides an extensible, backwards-compatible integration with Kubernetes. It runs as a single executable and a single process, supports flexible use cases through custom DNS entries, and is written in Go, making it memory-safe. kube-dns will be supported for at least one more release.

Beyond these, there are other feature updates, such as support for third-party monitoring and more features graduating to stable and beta. For more details on the Kubernetes release, visit the Kubernetes website.

Related:
- Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
- Platform9 announces a new release of Fission.io, the open source, Kubernetes-native serverless framework
- Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
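To decide whether an upgrade is needed, you compare your cluster's version against the patched releases. The upstream advisory lists the fixes as shipping in v1.10.11, v1.11.5, and v1.12.3, with v1.13.0 including the fix; verify those numbers against the official CVE-2018-1002105 notice. A small Python sketch of that comparison (the patched-version table is an assumption from the advisory, not an API):

```python
# Flag Kubernetes server versions affected by CVE-2018-1002105.
# Patched releases assumed from the upstream advisory:
# v1.10.11, v1.11.5, v1.12.3; v1.13.0+ ships with the fix.

def parse(version):
    return tuple(int(x) for x in version.lstrip("v").split("."))

PATCHED = {(1, 10): 11, (1, 11): 5, (1, 12): 3}

def is_vulnerable(version):
    major, minor, patch = parse(version)
    if (major, minor) >= (1, 13):
        return False
    fixed = PATCHED.get((major, minor))
    if fixed is None:
        return True   # older, unpatched series: assume vulnerable
    return patch < fixed

print(is_vulnerable("v1.12.2"))  # True  -> upgrade
print(is_vulnerable("v1.12.3"))  # False -> patched
```

In practice you would feed this the `gitVersion` reported by the API server's version endpoint.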

IPython 7.2.0 is out!
Savia Lobo | 04 Dec 2018 | 2 min read

Last week, the IPython community announced its latest release, IPython 7.2, which is available on PyPI and will soon be available on Conda. This version includes some improvements, minor bug fixes, and new configuration options.

Users can update to IPython 7.2 with the following command:

    pip install ipython --upgrade

Improvements in IPython 7.2

Ability to show subclasses when using pinfo and other utilities

IPython will now list the first 10 subclasses whenever '?' or '??' is used on a class.

OSMagics.cd_force_quiet configuration option

Users can now set the OSMagics.cd_force_quiet option to force the %cd magic to behave as if -q was passed:

    In [1]: cd /
    /
    In [2]: %config OSMagics.cd_force_quiet = True
    In [3]: cd /tmp
    In [4]:

Current vi mode can now be configured

To control this feature, users need to set TerminalInteractiveShell.prompt_includes_vi_mode to a boolean value (default: True).

Other improvements and bug fixes

- Fixed a bug preventing PySide2 GUI integration from working
- CI now runs on Mac OS
- The IPython ‘Demo’ mode has been fixed
- Fixed the %run magic with a path in the name
- The CWD is now added to sys.path after the stdlib
- Signatures (especially long ones) now render better
- Jedi is re-enabled by default if it is installed
- A new minimal exception reporting mode, mostly useful for educational purposes

There are still some outstanding bugs that will be fixed in the next release, which the community plans to publish before the end of the year. To know more about this release in detail, head over to IPython’s documentation.

Related:
- IPython 7.0 releases with AsyncIO integration and new async libraries
- Make your presentation with IPython
- How to connect your Vim editor to IPython
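Options set with %config last only for the session. The same settings can be made persistent in an IPython configuration file; a sketch assuming the standard traitlets config location (e.g. ~/.ipython/profile_default/ipython_config.py):

```python
# ipython_config.py -- persistent equivalents of the %config calls above.
# `get_config()` is provided by IPython when it loads this file.
c = get_config()

# Make %cd always behave as if -q (quiet) were passed.
c.OSMagics.cd_force_quiet = True

# Show the current vi editing mode in the prompt (default: True).
c.TerminalInteractiveShell.prompt_includes_vi_mode = True
```

Run `ipython profile create` first if the profile directory does not exist yet.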
Microsoft reportedly ditching EdgeHTML for Chromium in the Windows 10 default browser
Prasad Ramesh | 04 Dec 2018 | 3 min read

According to a story by Windows Central, Microsoft is working on a Chromium-based web browser, likely a replacement for its current web browser on Windows 10, Microsoft Edge.

Edge never took off

Microsoft Edge was launched in 2015, built from scratch on EdgeHTML. Microsoft tried to drive adoption by making the Windows 10 update free for a limited time. However, the browser was not well received even at an early stage due to a large number of issues, and it has not been very stable since, driving developers and users away. Because of this, Microsoft is reportedly abandoning Edge and the EdgeHTML engine for Chromium, the open source browser project behind Google Chrome. The new browser is codenamed Anaheim and will replace Edge as the default browser in Windows. It is not known whether Edge will be renamed or whether the UI will change, but EdgeHTML will no longer power Windows 10’s default browser.

Using the Chromium engine instead

Using Chromium means that websites in the Windows 10 default browser will load as they do in Google Chrome, and users will no longer face the loading and connectivity issues that plagued the EdgeHTML-based Microsoft Edge. For smartphones, little will change, as Edge on smartphones already uses platform-specific engines.

Recently, 9to5Google reported that Microsoft engineers are committing code to the Chromium project, which suggests they are building their own browser on Chromium instead of EdgeHTML. The browser may be out next year.

Public reactions

A comment on Hacker News reads: “You [web developers] don't test your work in Edge and because you tell all your friends and family to use Chrome instead of Edge. So stop complaining about monoculture. Many of you helped create it.”

Some sarcasm thrown in another comment: “I test my app in Edge, every time a new version is released. When it inevitably fails, I shake my head in disbelief that Microsoft still hasn't paid a dev to spend a couple months fixing their IndexedDB implementation, which has been incomplete since the IE days. Can't expect a small rag-tag group like Microsoft to compete with a rich corporate behemoth like Mozilla, I guess :)”

Another comment says: “How can I test Edge when Microsoft don't release it for Mac and Linux? A browser for a single OS? Talk about monoculture.”

https://twitter.com/headinthebox/status/1069796773017710592

Another tweet suggests that this move towards Chromium is about ElectronJS’s stronghold over app development, not about Microsoft wanting a Chrome-like browsing experience in its default browser:

https://twitter.com/SwiftOnSecurity/status/1069776335336292352

After years of Internet Explorer being ridiculed and Edge not being the success Microsoft hoped for, it would be nice to see the Windows default browser catching up with the likes of Chrome and Firefox.

Related:
- Introducing Howler.js, a JavaScript audio library with full cross-browser support
- Firefox Reality 1.0, a browser for mixed reality, is now available on Viveport, Oculus, and Daydream
- Microsoft becomes the world’s most valuable public company, moves ahead of Apple

DeepMind’s AlphaFold is successful in predicting the 3D structure of a protein, making major inroads for AI use in healthcare
Sugandha Lahoti | 04 Dec 2018 | 3 min read

Google’s DeepMind is turning its attention to using AI for science and healthcare. This is underscored by the fact that last month, Google made major inroads into healthcare tech by absorbing DeepMind Health, and in August its AI was successful in spotting over 50 sight-threatening eye diseases. Now it has solved another tough science problem.

At an international conference in Cancun on Sunday, DeepMind’s latest AI system, AlphaFold, won the Critical Assessment of protein Structure Prediction (CASP) competition. CASP is held every two years, inviting participants to submit models that predict the 3D structure of a protein from its amino acid sequence. The ability to predict a protein’s shape is useful to scientists because it is fundamental to understanding its role within the body; it also matters for diagnosing and treating diseases such as Alzheimer’s, Parkinson’s, Huntington’s, and cystic fibrosis. AlphaFold’s SUMZ score was 127.9 (the previous winner’s SUMZ score was 80.46), achieving what CASP called “unprecedented progress in the ability of computational methods to predict protein structure.” The second-placed team, named Zhang, scored 107.6.

How DeepMind’s AlphaFold works

AlphaFold’s team trained a neural network to predict a distribution of distances between every pair of residues in a protein. These probabilities were then combined into a score that estimates how accurate a proposed protein structure is. They also trained a separate neural network that uses all the distances in aggregate to estimate how close a proposed structure is to the right answer. These scoring functions were used to search the protein landscape for structures that matched the predictions.

They used two distinct methods to construct predictions of full protein structures. The first method repeatedly replaced pieces of a protein structure with new protein fragments.
They trained a generative neural network to invent new fragments, used to continually improve the score of the proposed protein structure. The second method optimized scores through gradient descent to build highly accurate structures. This technique was applied to entire protein chains rather than to pieces that must be folded separately before being assembled, reducing the complexity of the prediction process.

DeepMind founder and CEO Demis Hassabis celebrated the victory in a tweet:

https://twitter.com/demishassabis/status/1069411081603481600

Google CEO Sundar Pichai was also excited about this development and how AI can be used for scientific discovery:

https://twitter.com/sundarpichai/status/1069450462284267520

Related:
- NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
- Google makes major inroads into healthcare tech by absorbing DeepMind Health
- A new episodic memory-based curiosity model to solve procrastination in RL agents by Google Brain, DeepMind and ETH Zurich
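The second method's core idea, descending the gradient of a distance-based score until the structure matches the predicted distances, can be sketched in one dimension. The target distances and positions below are hypothetical; AlphaFold works on full 3D structures with learned distance distributions:

```python
# Toy gradient descent on a distance-based score: nudge 1-D "residue"
# positions until pairwise gaps match predicted target distances.

targets = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 2.0}  # hypothetical predicted distances
x = [0.0, 3.0, 4.0]                                # initial guess at positions

def grad(x):
    """Gradient of the sum of squared errors on pairwise (signed) gaps."""
    g = [0.0] * len(x)
    for (i, j), d in targets.items():
        diff = (x[j] - x[i]) - d
        g[j] += 2.0 * diff
        g[i] -= 2.0 * diff
    return g

lr = 0.05
for _ in range(500):
    g = grad(x)
    x = [xi - lr * gi for xi, gi in zip(x, g)]

print([round(x[j] - x[i], 4) for (i, j) in sorted(targets)])  # [1.0, 2.0, 1.0]
```

Because the score is differentiable in the positions, the whole chain can be optimized at once, which is what lets AlphaFold skip folding fragments separately.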

GitHub acquires Spectrum, a community-centric conversational platform
Savia Lobo | 03 Dec 2018 | 2 min read

Last week, Bryn Jackson, CEO of Spectrum, a real-time, community-centered conversational platform, announced that the project has been acquired by GitHub. Bryn, along with Brian Lovin and Max Stoiber, founded Spectrum in February 2017. The community is a place to ask questions, request features, report bugs, and chat with the Spectrum team.

In a blog post, Bryn wrote: “After releasing an early prototype, people told us they also wanted to use it for their communities, so we decided to go all-in and build an open, inclusive home for developer and designer communities. Since officially launching the platform late last year, Spectrum has become home to almost 5,000 communities!”

What will Spectrum bring to GitHub communities?

By joining GitHub, Spectrum aims to align with GitHub’s goals of making developers’ lives easier and of fostering a strong community across the globe. For communities across GitHub, Spectrum will provide:

- A space for different communities across the internet
- Free access to its full suite of features, including unlimited moderators, private communities and channels, and community analytics
- A deeper integration with GitHub

Spectrum has also opened a pull request to add some of GitHub’s policies to Spectrum’s Privacy Policy, which will be merged this week. Though many users had not heard of Spectrum, they are reacting positively to its acquisition by GitHub; many have also compared it with platforms such as Slack, Discord, and Gitter.

To know more about this news, read Bryn Jackson’s blog post.

Related:
- GitHub Octoverse: The top programming languages of 2018
- GitHub has passed an incredible 100 million repositories
- GitHub now allows repository owners to delete an issue: curse or a boon?

Unity introduces guiding principles for ethical AI to promote responsible use of AI
Natasha Mathur | 03 Dec 2018 | 3 min read

The Unity team announced guidelines for ethical AI last week to promote more responsible use of artificial intelligence by its developers, its community, and the company. Unity’s guide to ethical AI comprises six guiding AI principles.

Unity’s six guiding AI principles

Be unbiased: This principle focuses on designing AI tools in a way that complements the human experience in a positive way. To achieve this, it is important to take into consideration all types of diverse human experiences, which in turn leads to AI complementing experiences for everybody.

Be accountable: This principle emphasizes keeping in mind the potential negative consequences, risks, and dangers of AI tools while building them. It focuses on assessing the factors that might cause “direct or indirect harm” so that they can be avoided. This ensures accountability.

Be fair: This principle focuses on ensuring that AI tools do not interfere with “normal, functioning democratic systems of government”. The development of AI tools that could lead to the suppression of human rights (such as free expression), as defined by the Universal Declaration, should be avoided.

Be responsible: This principle stresses the importance of developing products responsibly, ensuring that AI developers don’t take undue advantage of the vast capabilities of AI while building a product.

Be honest: This principle focuses on building trust among the users of a technology by being clear and transparent about the product, so that users can understand its purpose and make better, more informed decisions regarding it.

Be trustworthy: This principle emphasizes the importance of protecting AI-derived user data. “Guard the AI derived data as if it were handed to you by your customer directly in trust to only be used as directed under the other principles found in this guide”, reads the Unity blog.
“We expect to develop these principles more fully and to add to them over time as our community of developers, regulators, and partners continue to debate best practices in advancing this new technology. With this guide, we are committed to implementing the ethical use of AI across all aspects of our company’s interactions, development, and creation”, says the Unity team.

For more information, check out the official Unity blog post.

Related:
- EPIC’s Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018
- Teaching AI ethics – Trick or Treat?
- SAP creates AI ethics guidelines and forms an advisory panel
article-image-librepcb-0-1-0-released-with-major-changes-in-library-editor-and-file-format
LibrePCB 0.1.0 released with major changes in library editor and file format

Amrata Joshi
03 Dec 2018
2 min read
Last week, the team at LibrePCB released LibrePCB 0.1.0, a free EDA (Electronic Design Automation) package used for developing printed circuit boards. Just three weeks ago, LibrePCB 0.1.0 RC2 had been released with major changes to the library manager, control panel, library editor, schematic editor and more.

The key features of LibrePCB include cross-platform support (Unix/Linux, Mac OS X, Windows), an all-in-one design (project management plus library/schematic/board editors), and an intuitive, modern and easy-to-use graphical user interface. It also features powerful library design and human-readable file formats.

What's new in LibrePCB 0.1.0?

Library editor
The new version saves the library URL and improves how the schematic-only component property is saved.

File format stability
Since this is a stable release, the file format is now stable: projects created with this version will be loadable with future releases of LibrePCB.

Users are comparing LibrePCB 0.1.0 with KiCad, a free open source EDA suite for OSX, Linux and Windows, and asking which one is better. Many users think LibrePCB 0.1.0 comes out ahead because its part libraries are managed well, whereas KiCad doesn't have a coherent workflow for managing part libraries; in KiCad it is difficult to manage the parts of a component, such as its schematic symbol, footprint, and 3D model.

Read more about this news, in detail, on the LibrePCB blog.

A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly
How to secure your Raspberry Pi board [Tutorial]
Nvidia unveils a new Turing architecture: "The world's first ray tracing GPU"
UC Davis students bag $500k award and the 2018 Amazon Alexa prize for creating a social conversational system, Gunrock

Amrata Joshi
03 Dec 2018
3 min read
Last week, a team of students from the University of California, Davis, won the global 2018 Amazon Alexa prize and $500,000 at the AWS re:Invent 2018 conference. They created Gunrock, a chatbot that can converse with humans on topics such as entertainment, sports, politics, technology, and fashion. The chatbot was named after the university's mascot.

Gunrock maintained an average of 9 minutes and 59 seconds of conversation in the final round and scored 3.1 out of 5. The second prize, and $100,000, went to Team Alquist from the Czech Technical University in Prague, which scored 2.6 out of 5. This year, the finalists were announced live on Twitch.

The teams used the Conversational Bot (CoBot) toolkit, the Alexa Skills Kit, and the AWS cloud to create socialbots for Alexa. The top teams were chosen on the basis of their potential scientific contribution to the field of AI research, the technical merit of their approaches, the novelty of their ideas, and their ability to execute against their plan. The finals were held at Amazon's Seattle headquarters over two days in early November and involved three interactors who held conversations with the socialbots, as well as academic experts and industry professionals who served as judges.

Amazon launched the Alexa prize in 2016 to tackle the challenge of building agents that can carry multi-turn open domain conversations. The objective of the competition is to build agents that can converse coherently and engage with humans for 20 minutes. Last year, nearly 3 million Alexa U.S. customers logged more than 162,000 hours of conversation with the 2017 Alexa prize bots.

The Gunrock team programmed the chatbot using conversational data from millions of Amazon Alexa users. The team of 11 students was led by Zhou Yu, an assistant professor in the Computer Science department.
The most striking feature of Gunrock is that it uses language disfluencies, or pauses, such as "hm" or "ah". This makes Gunrock more human-like and sets it apart from traditional bots. The team built a natural language understanding model that breaks dialogue down into self-contained semantic units and analyzes the language to determine context. They further integrated structured knowledge bases, such as Google's knowledge base, into Gunrock. This helped the bot handle a wide variety of user behaviors, including topic switching and question answering.

Some people on Hacker News are skeptical of the win. One user pointed out, "Only the Gunrock team had an industry professional, Chun-Yen C, maybe the team won because of this reason." Others are confused about what benefit Amazon gets out of such competitions; one user asked, "Since each of the teams was given around US$250,000, is it sort of a paid work?" Questions were also raised about the idea of language disfluency. A user said, "Google also recently faced a backlash over its feature in the Virtual Assistant." Sentiments are something very natural to humans, and they might not sound convincing when coming from a bot.

To know more about this news, check out Amazon's blog post.

Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!
Amazon announces the public preview of AWS App Mesh, a service mesh for microservices on AWS
Introducing AWS DeepRacer, a self-driving race car, and Amazon's autonomous racing league to help developers learn reinforcement learning in a fun way
Reddit takes a stand against the EU copyright directives; greets EU redditors with 'warning box'

Natasha Mathur
03 Dec 2018
4 min read
The Reddit team has decided to take a stand against the EU copyright directive: it announced last week that EU Reddit users will now be greeted with a "warning box" when accessing Reddit via desktop. The warning box provides users with information about the EU copyright directives (specifically articles 11 and 13) and points them to resources and support sites. This is Reddit's attempt to make EU users more aware of the law's potential impact on the free and open internet.

This is not the first time Reddit has stood up against the controversial EU copyright law; it published a post updating users on the EU copyright directives two months ago.

Article 13 covers the "use of protected content by information society service providers storing and giving access to large amounts of works and other subject-matter uploaded by their users". In a nutshell, any user-generated content found to be copyrighted would need to be censored by online platforms such as YouTube, Twitter, Facebook, and Reddit.

Article 11 covers the "Protection of press publications concerning digital uses", under which sites would have to pay publishers if a part of their work is shared on those sites.

"Under the new Directive, activity that is core to Reddit, like sharing links to news articles, or the use of existing content for creative new purposes (r/photoshopbattles, anyone?) would suddenly become questionable under the law, and it is not clear right now that there are feasible mitigating actions that we could take while preserving core site functionality," says the Reddit team.

The Reddit team also argues that various similar attempts made in the past in different countries within Europe have "actually harmed publishers and creators". Furthermore, Reddit has come out with a number of suggestions, in partnership with Engine and the Copia Institute, for ways to improve both proposals.
Here are some of the fixes:

Suggestions for Article 11
- Clarification is needed, in detail, about what content requires a license. There is confusion about whether a single word or a link would qualify for a license.
- More information is needed on which sites the proposal applies to. The current term "digital uses" is quite broad; for example, if the target is news aggregators, make that explicit.
- It should be made clear that the proposal does not apply to individual users, but only to large news collating sites.
- Clarification should be made on what a "press publisher" is under the law, as it could be interpreted to include all kinds of sites. It should also be made clear that a press publisher does not include scientific journals and similar non-news-based publications.

Suggestions for Article 13
- Clarification is needed on what is meant by "appropriate and proportionate", as it currently doesn't provide any guidance to sites online and can be incorrectly interpreted, leading to litigation and abuse.
- There must be clear and significant penalties in place for providing false reports of infringement.
- It should be the responsibility of copyright holders to provide platforms with specific identifying content, ownership details, and content information when determining infringing works.
- A "fair use-like exception" should be implemented in the EU to legalize memes, remixes, and other everyday online culture.

"We hope that today's action will drive the point home that there are grave problems with Articles 11 and 13 and ... that EU lawmakers will listen to those who use and understand the internet the most and reconsider these problematic articles. Protecting rights holders need not come at the cost of silencing European internet users," says the Reddit team.
GitHub updates developers and policymakers on EU copyright Directive at Brussels
What the EU Copyright Directive means for developers – and what you can do
YouTube's CBO speaks out against Article 13 of EU's controversial copyright law
NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale

Melisha Dsouza
03 Dec 2018
5 min read
In the paper 'The challenge of realistic music generation: modelling raw audio at scale', researchers from DeepMind have embarked on modelling music in the raw audio domain. They explore autoregressive discrete autoencoders (ADAs) as a way to enable autoregressive models to capture long-range correlations in waveforms.

Autoregressive models are the best performers at generating raw audio waveforms of speech, but when applied to music they are biased towards capturing local signal structure at the expense of modelling long-range correlations. Since music exhibits structure at many different timescales, this is problematic, making realistic music generation a challenging task. The paper will be presented at the 32nd Conference on Neural Information Processing Systems (NIPS 2018), to be held in Montréal, Canada, this week.

Challenges when music is symbolically represented

Music is complex by nature: it is made up of waveforms whose structure spans many different time periods and magnitudes, so modelling all of the temporal correlations that arise from this structure is challenging. Most work in music generation has focused on symbolic representations, but this method has multiple limitations. Symbolic representations abstract away the idiosyncrasies of a particular performance, and these nuances are often musically quite important, impacting a listener's enjoyment of the music. The paper gives an example: the precise timing, timbre and volume of the notes played by a musician do not correspond exactly to those written in a score. Symbolic representations are also often tailored to particular instruments, which reduces their generality and means substantial work is needed to apply existing modelling techniques to new instruments. Digital representations of audio waveforms, by contrast, retain all the musically relevant information, and models of them can be applied to recordings of any set of instruments.
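To get a feel for the timescales involved, it helps to look at the receptive field of a stack of causal dilated convolutions, the building block of WaveNet-style autoregressive audio models. The sketch below is a generic calculation under assumed hyperparameters (kernel size 2, ten doubling dilations repeated three times), not the exact architecture from the paper:

```python
def dilated_stack_receptive_field(kernel_size, dilations):
    # Each causal dilated convolution layer extends the receptive field
    # by (kernel_size - 1) * dilation timesteps; the input sample itself
    # contributes the initial 1.
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# A WaveNet-like stack: dilations 1, 2, 4, ..., 512, repeated 3 times.
dilations = [2 ** i for i in range(10)] * 3
rf = dilated_stack_receptive_field(2, dilations)  # 3070 timesteps
seconds = rf / 16000                              # ~0.19 s at 16 kHz
```

Even this thirty-layer stack covers only a fraction of a second of 16 kHz audio, which illustrates why capturing structure across tens of seconds requires the receptive-field enlargement techniques the paper proposes.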
However, the task is more challenging than modelling symbolic representations: generative models of waveforms that capture musical structure at many timescales require high representational capacity, distributed effectively over the various musically-relevant timescales.

Steps performed to address music generation in the raw audio domain

The researchers use autoregressive models to model structure across roughly 400,000 timesteps, or about 25 seconds of audio sampled at 16 kHz, and demonstrate a computationally efficient method to enlarge their receptive fields using autoregressive discrete autoencoders (ADAs). They explore the domain of autoregressive models for this task and use the argmax autoencoder (AMAE) as an alternative to vector quantisation variational autoencoders (VQ-VAE); this autoencoder converges more reliably when trained on a challenging dataset.

To model long-range structure in musical audio signals, the receptive fields (RFs) of AR models have to be enlarged. One way to do this is by providing a rich conditioning signal. The paper concentrates on this notion, which turns an AR model into an autoencoder by attaching an encoder that learns a high-level conditioning signal directly from the data. Temporal downsampling operations can be inserted into the encoder to make this signal more coarse-grained than the original waveform. The resulting autoencoder uses its AR decoder to model any local structure that this compressed signal cannot capture.

The researchers compare two techniques for modelling the raw audio. Vector quantisation variational autoencoders use vector quantisation (VQ): the queries are vectors in a d-dimensional space, and a codebook of k such vectors is learnt on the fly, together with the rest of the model parameters. The loss function is:

L_VQ-VAE = −log p(x | q_j) + (q_j − [q])^2 + β · ([q_j] − q)^2

where q is the query, q_j is the selected codebook vector, and the square brackets [·] denote the stop-gradient operator (no gradient is backpropagated through the bracketed quantity).
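The VQ lookup and the two auxiliary loss terms can be sketched in a few lines of numpy. This is an illustration only: the paper's models learn the codebook inside an autograd framework, where the stop-gradient brackets detach a tensor from backpropagation; here both terms simply share the same numerical value.

```python
import numpy as np

def vq_quantise(query, codebook):
    # Nearest-neighbour lookup: snap a d-dimensional query onto the
    # closest of the k codebook vectors (Euclidean distance).
    j = int(np.argmin(np.linalg.norm(codebook - query, axis=1)))
    return j, codebook[j]

def vq_aux_losses(query, code, beta=0.25):
    # The two auxiliary terms of the VQ-VAE loss: the codebook term
    # (q_j - [q])^2 and the commitment term beta * ([q_j] - q)^2.
    # The reconstruction term -log p(x|q_j) comes from the decoder
    # and is omitted here.
    sq_dist = float(np.sum((code - query) ** 2))
    return sq_dist + beta * sq_dist

codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
query = np.array([0.9, 1.2])
j, code = vq_quantise(query, codebook)   # selects codebook entry 1
loss = vq_aux_losses(query, code)
```

The choice of beta = 0.25 is a common default for the commitment weight, not a value taken from the paper.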
However, when VQ-VAEs are trained on challenging (i.e. high-entropy) datasets, they often suffer from codebook collapse: at some point during training, some portion of the codebook falls out of use and the model no longer uses the full capacity of the discrete bottleneck, leading to worse results and poor reconstructions.

As an alternative to the VQ-VAE method, the researchers propose a model called the argmax autoencoder (AMAE). It produces k-dimensional queries and features a nonlinearity that ensures all outputs lie on the (k − 1)-simplex. The quantisation operation is then simply an argmax operation, which is equivalent to taking the nearest k-dimensional one-hot vector in the Euclidean sense. This projection onto the simplex limits the maximal quantisation error, which makes the gradients that pass through it more accurate. To make sure the full capacity is used, an additional diversity loss term is added which encourages the model to use all outputs in equal measure. This loss can be computed using batch statistics: all queries q (before quantisation) are averaged across the batch and time axes, and the resulting vector q̄ is encouraged to resemble a uniform distribution.

Results of the experiment

This is what the researchers achieved:

Addressed the challenge of music generation in the raw audio domain using autoregressive models, extending their receptive fields in a computationally efficient manner.

Introduced the argmax autoencoder (AMAE), an alternative to the VQ-VAE which shows improved stability on this task.

Showed that separately trained autoregressive models at different levels of abstraction can capture long-range correlations in audio signals across tens of seconds, corresponding to hundreds of thousands of timesteps, at the cost of some signal fidelity.

You can refer to the paper for a comparison of results obtained across the various autoencoders and for more insights on this topic.
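The AMAE quantisation and diversity loss described above can also be sketched in numpy. As before, this is a minimal illustration of the mechanism, not the paper's implementation; the softmax is one assumed choice of simplex-projecting nonlinearity:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def amae_quantise(logits):
    # Project each query onto the (k - 1)-simplex with a softmax, then
    # quantise with an argmax, i.e. snap it to the nearest k-dimensional
    # one-hot vector in the Euclidean sense.
    q = softmax(logits)
    one_hot = np.eye(q.shape[-1])[np.argmax(q, axis=-1)]
    return q, one_hot

def diversity_loss(q_batch):
    # Average the pre-quantisation queries over the batch axis and
    # penalise deviation from the uniform distribution, encouraging
    # the model to use all k outputs in equal measure.
    q_bar = q_batch.mean(axis=0)
    uniform = np.full_like(q_bar, 1.0 / q_bar.shape[-1])
    return float(np.sum((q_bar - uniform) ** 2))

logits = np.array([[2.0, 0.1, -1.0], [0.0, 3.0, 0.2]])
q, codes = amae_quantise(logits)   # codes are one-hot vectors
```

Because the queries already lie on the simplex, the distance to the chosen one-hot vector is bounded, which is the property the paper credits for the more accurate straight-through gradients.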
Exploring Deep Learning Architectures [Tutorial]
Implementing Autoencoders using H2O
What are generative adversarial networks (GANs) and how do they work? [Video]
Anti-paywall add-on is no longer available on the Mozilla website

Sugandha Lahoti
03 Dec 2018
4 min read
The anti-paywall add-on has been removed from the Mozilla website. The author of the add-on, Florent Daigniere, confirmed that it has been removed from both Chrome and Mozilla. "This was done because the add-on violated the Firefox Add-on Distribution Agreement and the Conditions of Use," Daigniere wrote. "It appears to be designed and promoted to allow users to circumvent paywalls, which is illegal."

Last year, Daigniere released the anti-paywall browser extension, which maximizes the chances of bypassing paywalls. When he asked Mozilla why the add-on was taken down, he got this reply:

"There are various laws in the US that prohibit tools for circumventing access controls like a paywall. Both Section 1201 of the Digital Millennium Copyright Act (DMCA) and the Computer Fraud and Abuse Act (CFAA) are examples. We are responding to a specific complaint that named multiple paywalls bypassing add-ons. It did not target only your add-on."

The news was one of the top stories on Hacker News, where people largely oppose Mozilla's move:

"Making it harder to install addons (and breaking all the old ones) is one of the things contributing to Mozilla losing share to Chrome. People used to use Firefox over Chrome because of all the great addons, which they then broke, leaving users with less reason not to use Chrome."

"I used to default to Firefox for work. Then they killed the old addons, which broke a major part of my workflow (FireFTP's 'open a file and as you edit it it automatically re-uploads' feature). So there was a lot less keeping me stuck to it."

"This extension just seems to strip tracking data and pretend to be a Google bot. It baffles me that this is somehow concerning enough to be taken down. And anyway, isn't making exemptions for Google's robots sort-of against their policy?"

Users also offered Daigniere advice and suggestions on how to proceed.
"I would consult with an attorney to determine legal options for an adequate defense and expected expenses. A consult is not a contract, and you can change your mind if you are unwilling to take the risk of a lawsuit. I suspect the takedown notice is a DMCA takedown based upon a flawed assumption of the law. The hard part about this is arguing the technical merits of the case before non-technical people. While the takedown notice is probably in error, they could still make a good argument around bypassing their security controls. You could appeal to the EFF or ACLU. If they are willing to take your case, it will be pro bono."

"I'd just move on. To be honest, sites with those types of paywalls should not be indexed. The loophole you are taking advantage of here is a bait and switch by these sites: they want the search traffic but don't want public access. Most of us have already adapted, however, and avoid these sites or pay for them. Your plugin title blatantly describes that you're avoiding paying for something they are charging for, so even though it may not be illegal, it's not something I'd waste energy fighting for."

"Rename the plugin and change the description. The message from Mozilla states that the problem is the intent of the plugin. The technological measures it actually takes are not illegal per se, but are illegal when used to circumvent paywalls. If you present this as a plug-in that allows you to view websites as the Google bot views them, for educational and debugging purposes, there is no problem. You can give the fact that it won't see the paywall as an example. It's actually useful for that purpose: you are not lying. It's just that most people will install the plugin for its 'side effects'. Their use of it will still be illegal, but the intent will not be illegal."

Read more of this conversation on Hacker News.
The State of Mozilla 2017 report focuses on internet health and user privacy
Mozilla criticizes EU's terrorist content regulation proposal, says it's a threat to user rights
Mozilla v. FCC: Mozilla challenges FCC's elimination of net neutrality protection rules