
Tech News


Python Software Foundation and JetBrains’ Python Developers Survey 2018

Prasad Ramesh
07 Feb 2019
2 min read
The Python Software Foundation and JetBrains conducted a survey to identify the latest trends in usage and adoption across the Python community. More than 20,000 developers from over 150 countries participated. The Python Developers Survey 2018 was the second run in this collaboration, after the first one in 2017.

Language usage

From the survey, 84% of developers stated that they use Python as their primary language, while the other 16% use it as a secondary language. This is up from 79% using Python as their primary language in 2017. About 50% of Python users also use JavaScript, while languages like C/C++, Java, and C# are used less than in 2017. Bash/Shell usage among Python developers has also grown.

Python uses

60% of respondents said they use Python for both work and personal projects; 21% use it exclusively for personal, educational, or side projects, and 19% use it only for work. 58% of Python users use the language for data analysis, 8% more than last year. 52% use Python for web development and 43% for DevOps/system administration. Machine learning use also saw an uptick of 7% and stands at 38%. In general, Python is used more for data analysis than for web development. The numbers above come from questions where multiple areas could be selected. When only a single response was allowed, web development was the most popular answer with 27%; data analysis stood at 17% and machine learning at 11%. Interestingly, if you treat ‘data science’ as data analysis and machine learning combined, it becomes the largest area, totaling 28%.

Python versions in use

Python 3 is seeing wider adoption at 84%, compared to 75% in 2017. Python 2 stands at 16% and will lose support from the core team next year; major libraries are already dropping support for it.

Frameworks and libraries

Among web frameworks, Flask and Django were the most popular at 47% and 45%. Among data science packages, NumPy was the most used at 62%, with pandas and Matplotlib at 51% and 46%. For more in-depth results of the survey, you can visit the JetBrains website.

Related:
  • pandas will drop support for Python 2 this month with pandas 0.24
  • Python steering council election results are out for January 2019
  • Python 3.8.0 alpha 1 is now available for testing
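The combined ‘data science’ figure quoted in the survey is just the sum of the two single-response shares, which a quick check confirms:

```python
# Single-response shares from the Python Developers Survey 2018,
# as quoted in the article (percentages of respondents).
single_response = {
    "web development": 27,
    "data analysis": 17,
    "machine learning": 11,
}

# Treating "data science" as data analysis + machine learning combined:
data_science = single_response["data analysis"] + single_response["machine learning"]
print(data_science)                                        # 28
print(data_science > single_response["web development"])   # True
```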


What if AIs could collaborate using human-like values? DeepMind researchers propose a Hanabi platform

Prasad Ramesh
06 Feb 2019
3 min read
A paper with inputs from 15 researchers, titled The Hanabi Challenge: A New Frontier for AI Research, discusses artificial intelligence systems playing a card game called Hanabi. The researchers propose an experimental framework called the Hanabi Learning Environment for the AI research community to test and advance algorithms. It can help in assessing the performance of current state-of-the-art algorithms and techniques.

What’s special about Hanabi?

Hanabi is a cooperative game for two to five players, played with numbered cards. Players cannot see their own cards, so the game is one of imperfect information: you must trust the limited hints other players provide and make deductions to advance your cards. The game is a test of collaboratively sharing information with discretion; the rules are specified in detail in the research paper. Games have always been used to showcase or test the abilities of artificial intelligence and machine learning, be it Go, chess, Dota 2, or others. So why would Hanabi be ‘a new frontier for AI research’? The difference is that Hanabi needs a bit of a human touch to play. Factors like trust, imperfect information, and cooperation come into the picture, which is why the game is a good testing ground for AI.

What’s the paper about?

The idea is to test the collaboration of AI agents where information is limited and only implicit communication is allowed. The researchers say that Hanabi makes reasoning about the beliefs and intentions of other agents prominent. They believe that developing techniques that instill agents with such a theory of mind will, in addition to succeeding at Hanabi, unlock ways for agents to collaborate with human partners. The researchers have also introduced the open-source Hanabi Learning Environment, an experimental framework other researchers can use to assess their techniques.

To play Hanabi, a theory of mind is necessary: reasoning about human-like traits such as beliefs, intents, and desires. Theory-of-mind reasoning matters not just in how humans approach this game, but in how humans handle communication and interaction whenever multiple parties are involved.

Results and further work

The paper evaluates state-of-the-art reinforcement learning algorithms based on deep learning. In self-play, they fall short of hand-coded Hanabi bots; in collaborative play, they do not collaborate at all. This shows there is a lot of room for advances in this area related to theory of mind. The code for the Hanabi Learning Environment is written in Python and C++ and will be available on DeepMind’s GitHub; its interface is similar to OpenAI Gym. For more details about the game and how the theory will help in testing AI agent interactions, check out the research paper.

Related:
  • Curious Minded Machine: Honda teams up with MIT and other universities to create an AI that wants to learn
  • Technical and hidden debts in machine learning – Google engineers give their perspective
  • The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence
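Hanabi’s defining twist, that each player sees every hand except their own, is easy to sketch. The following is a minimal, self-contained illustration of how a Hanabi-style observation hides the observing player’s cards; the card values and data structure are invented for illustration and are not the Hanabi Learning Environment’s actual API:

```python
# Minimal sketch of Hanabi-style imperfect information: an observation
# for player `p` contains every hand except player p's own.
# (Invented structure for illustration; not the real environment's API.)

hands = {
    0: [("red", 1), ("blue", 3)],
    1: [("green", 2), ("white", 5)],
    2: [("yellow", 4), ("red", 2)],
}

def observation(player):
    """Return what `player` can see: all hands but their own."""
    return {p: cards for p, cards in hands.items() if p != player}

obs = observation(0)
print(sorted(obs))   # [1, 2] -- player 0's own hand is hidden
print(0 in obs)      # False
```

An agent must therefore infer its own hand from the hints other players choose to give, which is where the theory-of-mind reasoning the paper emphasizes comes in.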


Chromium developers to introduce a “Never-Slow Mode”, which sets limits on resource usage

Bhagyashree R
06 Feb 2019
2 min read
Today, Alex Russell, a Google software engineer, submitted a patch called ‘Never-Slow Mode’ for Chromium. With this patch, various per-interaction and per-resource limits will be enforced to keep the main thread free. Russell’s patch is very similar to a bug Craig Hockenberry, a partner at The Iconfactory, reported for WebKit last week, in which he suggested limiting how much JavaScript code a website can load to avoid resource abuse of users’ computers.

Here are some of the changes that will be made under this patch:
  • Large scripts will be blocked.
  • document.write() will be turned off.
  • Client-Hints will be enabled pervasively.
  • Resources without ‘Content-Length’ set will be buffered.
  • Budgets will be reset on interaction.
  • Long script tasks, those taking more than 200ms, will pause all page execution until the next interaction.
  • Budgets will be set for certain resource types such as script, font, CSS, and images.

The limits suggested under this patch (all sizes are wire sizes) are listed in a table in the patch. Source: Chromium

Like Hockenberry’s suggestion, this patch got both negative and positive feedback from developers. Some Hacker News users believe it will curb web bloat. One user commented, “It's probably in Google's interest to limit web bloat that degrades UX”. Another said, “I imagine they’re trying to encourage code splitting.” Another Hacker News user argued that hard-coded limits will probably not work: “Hardcoded limits are the first tool most people reach for, but they fall apart completely when you have multiple teams working on a product, and when real-world deadlines kick in. It's like the corporate IT approach to solving problems — people can't break things if you lock everything down. But you will make them miserable and stop them doing their job”.

You can check out the patch submitted by Russell at Chromium Gerrit.
Related:
  • Chromium developers propose an alternative to webRequest API that could result in existing ad blockers’ end
  • Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
  • Chromium-based Brave browser shows 22% faster page load time than its Muon-based counterpart
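The actual byte budgets live in a table in the patch (not reproduced here), but the enforcement idea, blocking a resource once its type’s wire-size budget is exhausted, can be sketched as follows. The caps below are invented placeholders, not Chromium’s real numbers:

```python
# Sketch of per-resource-type wire-size budgets in the spirit of
# Never-Slow Mode. The caps are invented placeholders, NOT the
# numbers proposed in the actual Chromium patch.
BUDGETS = {"script": 500_000, "font": 100_000, "css": 200_000, "image": 1_000_000}

class BudgetTracker:
    def __init__(self, budgets):
        self.remaining = dict(budgets)

    def allow(self, rtype, wire_size):
        """Charge `wire_size` bytes against the type's budget;
        return False (block the resource) once the budget is exhausted."""
        if self.remaining.get(rtype, 0) < wire_size:
            return False
        self.remaining[rtype] -= wire_size
        return True

t = BudgetTracker(BUDGETS)
print(t.allow("script", 400_000))  # True  -- within budget
print(t.allow("script", 200_000))  # False -- would exceed the cap
```

The “budgets will be reset on interaction” change in the patch corresponds to re-creating the tracker after each user interaction.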


Google’s new Chrome extension ‘Password Checkup’ checks if your username or password has been exposed in a third-party breach

Melisha Dsouza
06 Feb 2019
2 min read
Google released a new Chrome extension on Tuesday called ‘Password Checkup’. The extension informs users if the username and password they are currently using were stolen in any data breach, and then prompts them to reset their password. If a user’s Google account credentials have been exposed in a third-party data breach, the company automatically resets their password; the new extension extends the same level of protection to all services on the web.

On installation, Password Checkup appears in the browser bar as a green shield. The extension then checks login details against a database of around four billion breached usernames and passwords. If a match is found, a dialogue box prompting users to “Change your password” appears and the icon turns bright red. Source: Google

Password Checkup was designed by Google together with cryptography experts at Stanford University, with the constraint that Google itself should never learn a user’s credentials, to prevent wider exposure. Google’s blog states, “We also designed Password Checkup to prevent an attacker from abusing Password Checkup to reveal unsafe usernames and passwords.” Password Checkup uses multiple rounds of hashing, k-anonymity, private information retrieval, and a technique called blinding to keep the user’s credentials encrypted. You can check out Google’s blog for technical details on the extension.

Related:
  • Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations
  • Meet Carlo, a web rendering surface for Node applications by the Google Chrome team
  • Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications
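Password Checkup’s full protocol combines blinding and private information retrieval, but the k-anonymity idea on its own is simple: the client reveals only a short hash prefix, the server returns every breached hash sharing that prefix, and the client checks for a match locally, so the server never sees the full hash. A minimal, self-contained sketch of that one ingredient (the breached “database” here is made up, and this is not Google’s actual protocol):

```python
import hashlib

# Toy breached-credential "database" of SHA-256 hex digests (made up).
BREACHED = {hashlib.sha256(pw).hexdigest()
            for pw in [b"hunter2", b"password1", b"letmein"]}

PREFIX_LEN = 5  # the client reveals only this many hex characters

def server_lookup(prefix):
    """Server side: return all breached hashes sharing the prefix.
    The server never learns which (if any) the client cares about."""
    return {h for h in BREACHED if h.startswith(prefix)}

def is_breached(password: bytes) -> bool:
    """Client side: hash locally, query by prefix, compare locally."""
    digest = hashlib.sha256(password).hexdigest()
    candidates = server_lookup(digest[:PREFIX_LEN])
    return digest in candidates

print(is_breached(b"hunter2"))        # True
print(is_breached(b"correct horse"))  # False
```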


Google launches Live Transcribe, a free Android app to make conversations more accessible for the deaf

Natasha Mathur
06 Feb 2019
3 min read
Google announced a new, free Android app called Live Transcribe earlier this week. Live Transcribe is aimed at making real-world conversations more accessible for deaf and hard-of-hearing (HoH) people globally. Powered by Google Cloud, it automatically captions conversations in real time. It supports more than 70 languages, covering more than 80% of the world’s population.

How does Live Transcribe work?

Live Transcribe combines the results of extensive user experience (UX) research with sustained connectivity to speech processing servers. The team used cloud ASR (automated speech recognition) for greater accuracy, and, to keep that connectivity from causing excessive data usage, implemented an on-device neural-network-based speech detector that reduces the network data consumption Live Transcribe requires. https://www.youtube.com/watch?v=jLCwjIaPXwA

The on-device speech detector is built using AudioSet, Google’s dataset for audio event research announced last year. It is an image-like model capable of detecting speech, automatically managing network connections to the cloud ASR engine, and minimizing data usage over long periods of use.

Additionally, the Google team partnered with Gallaudet University, through user experience research collaborations, to make Live Transcribe intuitive. This in turn helps ensure that core user needs are satisfied while maximizing the app’s potential. Google considered devices ranging from computers, tablets, and smartphones to small projectors for effectively displaying auditory information and captions. After rigorous analysis, Google chose smartphones because of their “sheer ubiquity” and enhanced capabilities.

Addressing the transcription confidence issue

Google mentions that while building Live Transcribe, the team faced a challenge around displaying transcription confidence. The researchers explored whether they needed to show word-level or phrase-level confidence, which has traditionally been considered helpful. Drawing on previous UX research, they found that a transcript is easiest to read when it is not layered with confidence information and focuses on presenting the text well, supplementing it with auditory signals beyond the speech itself. Another useful UX signal is the noise level of the current environment; to surface it, the researchers built an indicator that visualizes the volume of the user’s speech relative to background noise. This gives users instant feedback on microphone performance, allowing them to adjust the placement of the phone.

What next?

To enhance the capabilities of this mobile automatic speech transcription service, the researchers plan to add on-device recognition, speaker separation, and speech enhancement. “Our research with Gallaudet University shows that combining it with other auditory signals like speech detection and a loudness indicator makes a tangibly meaningful change in communication options for our users”, the researchers state. Google has rolled out a test version of Live Transcribe on the Play Store, and it comes pre-installed on all Pixel 3 devices with the latest update. Public reaction to the news has been largely positive, with people appreciating the newly released app: https://twitter.com/MattWilliams84/status/1092510959988629505 https://twitter.com/iamAbhisarW/status/1092642493504589826 https://twitter.com/seanmarnold/status/1092508455200587776 For more information, check out the official Live Transcribe blog.

Related:
  • Transformer-XL: A Google architecture with 80% longer dependency than RNNs
  • Google News Initiative partners with Google AI to help ‘deep fake’ audio detection research
  • Google Cloud Firestore, the serverless, NoSQL document database, is now generally available
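A loudness indicator like the one described, speech level relative to background noise, boils down to comparing short-term signal energy against a noise-floor estimate. A minimal sketch of that standard technique (this is an assumption about the general approach, not Live Transcribe’s actual implementation):

```python
import math

def rms(samples):
    """Root-mean-square level of a block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def speech_to_noise_ratio(speech_block, noise_floor_rms):
    """How loud the current block is relative to background noise.
    Values near 1.0 mean the mic is mostly picking up noise."""
    return rms(speech_block) / noise_floor_rms

quiet_room = [0.01, -0.012, 0.009, -0.011]   # background noise estimate
speaking   = [0.30, -0.28, 0.33, -0.25]      # user speaking near the mic

ratio = speech_to_noise_ratio(speaking, rms(quiet_room))
print(ratio > 10)   # True -- speech stands well above the noise floor
```

An app would update the noise-floor estimate during silence and drive the on-screen indicator from this ratio, giving the user the microphone-placement feedback the article describes.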


Article 13 back on track: France and Germany join hands to save the EU's Copyright Directive

Melisha Dsouza
06 Feb 2019
3 min read
Last month, the EU Copyright Directive was put on hold after the European Council (with representatives from all the member states) couldn’t establish common ground on Article 13. Eleven member nations voted against the law, causing the final “trilogue” meeting (at which the law was supposed to be finalized) to be called off. According to those member states, Article 13 is ‘insufficiently protective of users’ rights.’ While most of the state governments remained in favor of Article 13, there was disagreement about the details of the law: France and Germany couldn’t agree on which internet platforms should install upload filters to censor their users’ posts.

That disagreement has now been resolved, and the process of enacting the law is back in motion, this time making the law even worse, says the EFF. After a lot of back and forth, Germany and France have come to an agreement that will likely affect tons of smaller sites as well as the larger ones, with hardly any protection for sites that host user-generated content. Julia Reda, a German politician and Member of the European Parliament, uploaded the leaked Franco-German deal [PDF], which shows that upload filters must be installed by everyone except services that fit all three of the following “extremely narrow criteria”:
  • Available to the public for less than 3 years
  • Annual turnover below €10 million
  • Fewer than 5 million unique monthly visitors

BoingBoing.net summarises this as follows: every single online platform where the public can communicate and that has been in operation for three years or more must immediately buy filters, no matter the size of the company; once a platform makes €5,000,000 in a year, it is obligated to implement copyright filters as well; and every site must demonstrate that it has taken ‘best efforts’ to license anything that its users might conceivably upload. This means that any time a rightsholder offers a site a license for content its users might use, the site is obliged to buy it, at whatever price the rightsholder names.

The next step for this draft is approval of the deal by the national negotiators for EU member states, followed by a final vote in the European Parliament. If the law is finalised, an enormous investment of money will be needed: copyright filters will cost hundreds of millions of euros to develop and maintain. Beyond the monetary aspect, the law may also block legitimate speech that uses copyrighted works to get a point across and is incorrectly identified as infringing. The petition opposing the law is now the largest petition in European history. You can head over to Techdirt for more insights on this news.

Related:
  • Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes
  • Facebook hires top EFF lawyer and Facebook critic as WhatsApp privacy policy manager
  • Russia opens civil cases against Facebook and Twitter over local data laws
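The three exemption criteria are conjunctive: a service must satisfy all of them to avoid upload filters. A short check makes that concrete, using the thresholds as reported from the leaked deal (the function and parameter names here are my own, for illustration):

```python
# The three "extremely narrow criteria" from the leaked Franco-German
# deal, as reported. A service is exempt from upload filters only if
# it satisfies ALL three. (Function/parameter names are illustrative.)
def exempt_from_filters(years_public: float,
                        annual_turnover_eur: int,
                        monthly_unique_visitors: int) -> bool:
    return (years_public < 3
            and annual_turnover_eur < 10_000_000
            and monthly_unique_visitors < 5_000_000)

# A two-year-old startup under both thresholds is exempt:
print(exempt_from_filters(2, 1_000_000, 100_000))   # True
# The same startup after its third year must install filters:
print(exempt_from_filters(3, 1_000_000, 100_000))   # False
```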

Allen Institute for Artificial Intelligence releases Iconary, an AI Pictionary game which allows humans and AI to play together

Sugandha Lahoti
06 Feb 2019
3 min read
Artificial intelligence has been riding a wave of success at difficult classic board games like chess and Go, and at complex multiplayer online games like Dota 2 and StarCraft. Last month, Google DeepMind’s AI AlphaStar defeated StarCraft II pros, and Unity launched an ‘Obstacle Tower Challenge’ to test AI game players. In a similar move, yesterday the Allen Institute for Artificial Intelligence released Iconary, an AI Pictionary game in which you collaborate with an artificial intelligence system. It is not man versus machine, but rather a man-and-machine collaborative game. Per the researchers behind the game, “Iconary is a breakthrough AI game in that it is the first Common Sense AI game involving language, vision, interpretation and reasoning.”

Gameplay

Iconary offers players a limited set of icons along with a phrase describing a situation. Players use the icon set to compose a scene that represents the phrase, and AllenAI tries to guess it correctly; the AI can also update its compositions based on its human partner’s guesses to guide them toward the correct phrase. The AI plays both the drawing side and the guessing side: when it draws, it arranges icons and the players have to guess the phrase. There are over 75,000 phrases supported in Iconary, with more being added regularly, and countless ways of representing each of them. This is challenging for an AI system, according to researcher Ani Kembhavi, “because it tests a wide range of common sense skills. The algorithms must first identify the visual elements in the picture, figure out how they relate to one another, and then translate that scene into simple language that humans can understand. This is why Pictionary could teach computers information that other AI benchmarks like Go and StarCraft can’t”.

The main goal of Iconary is to help AI systems come to an understanding of what humans are asking of them. This will help in overcoming multiple roadblocks in simple tasks by having humans and AI understand complex phrases. The researchers write, “AllenAI has never before encountered the unique phrases in Iconary, yet our preliminary games have shown that our AI system is able to both successfully depict and understand phrases with a human partner with an often surprising deftness and nuance.” You may give Iconary a try at iconary.allenai.org.

Related:
  • Introducing SCRIPT-8, an 8-bit JavaScript-based fantasy computer to make retro-looking games
  • DeepMind’s AlphaZero shows unprecedented growth in AI, masters 3 different games
  • Electronic Arts (EA) announces Project Atlas, a futuristic cloud-based AI powered game development platform


Exclusivity enforcement is now complete in Swift 5

Prasad Ramesh
06 Feb 2019
2 min read
Yesterday, Apple published a post about exclusivity enforcement in Swift 5. No, this is not an exclusive feature or a patent of some sort; the idea concerns how variables in a Swift program access memory. Swift is the programming language used for developing Apple apps.

What is exclusivity enforcement?

The Swift 5 release enables runtime checks on “Exclusive Access to Memory”, further backing Swift’s claim to being a ‘safe language’. For memory safety, Swift requires exclusive access to a variable in order to modify it: while a variable is being modified, it cannot be accessed under any other name, for example through a different argument. A programmer’s intention in cases of exclusivity violation is often ambiguous, so to protect against violations and enable safety features, exclusivity enforcement was introduced in Swift 4. In Swift 4, both compile-time and runtime enforcement were available, the latter only in debug builds. Swift 5 patches some of the remaining holes in exclusivity enforcement by changing the language model, so runtime exclusivity enforcement is now enabled by default in release builds. This can impact Swift projects in two ways:
  • Violations of the Swift exclusivity rules cause a runtime trap.
  • The overhead of the memory access checks can degrade performance slightly.

Why exclusivity enforcement?
  • The enforcement exists mainly to guarantee memory safety in Swift; it eliminates dangerous interactions involving mutable state.
  • It removes rules with unspecified behavior from the language.
  • It is necessary for maintaining ABI stability.
  • Beyond protecting memory safety, the enforcement helps in optimizing performance.
  • The exclusivity rules give programmers control over move-only types.

Even though the memory problem is a rare occurrence, addressing it early on improves Swift. A comment on Hacker News says: “The benefit being that you only have to deal with this issue rarely, rather than all the time with manual memory management.”

Related:
  • Apple is patenting Swift features like optional chaining
  • Swift 5 for Xcode 10.2 beta is here with stable ABI
  • Swift is now available on Fedora 28
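The shape of a runtime exclusivity check can be mimicked in any language: flag a variable as “being modified” on entry, and trap if a second access arrives before the first finishes. The sketch below is a conceptual analogy in Python, not Swift’s actual mechanism:

```python
# Conceptual analogy (in Python) of a runtime exclusivity check:
# a value flags itself while a modification is in progress, and any
# overlapping access traps. This is NOT Swift's actual implementation.
class Exclusive:
    def __init__(self, value):
        self.value = value
        self._modifying = False

    def modify(self, fn):
        if self._modifying:
            # Swift would trap here with an "overlapping accesses" error.
            raise RuntimeError("exclusivity violation: overlapping access")
        self._modifying = True
        try:
            self.value = fn(self)
        finally:
            self._modifying = False

counter = Exclusive(0)
counter.modify(lambda c: c.value + 1)   # fine: one access at a time
print(counter.value)                    # 1
try:
    # The closure re-enters modify() on the same variable while the first
    # modification is in flight, like passing the same variable as two
    # overlapping inout arguments in Swift.
    counter.modify(lambda c: c.modify(lambda d: d.value + 1))
except RuntimeError as e:
    print(e)   # exclusivity violation: overlapping access
```

In Swift the analogous violation traps the process at runtime (in release builds as of Swift 5) rather than raising a catchable error.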


The Electron team publicly shares the release timeline for Electron 5.0

Bhagyashree R
06 Feb 2019
2 min read
Last week, the team behind Electron announced that they will share the release timeline for Electron 5.0 and beyond publicly. For now, they have posted the schedule for Electron 5.0, which will include Chromium M72 and Node 12.0. Here is the schedule the team has made public, which may still change: Source: Electron

The Electron team has been working towards making its release cycles faster and more stable. In December last year, they planned to release new versions of Electron in cadence with new versions of its major components, including Chromium, V8, and Node. They started working according to this plan and have seen some success: the team was able to release the last two versions of Electron (3.0 and 4.0) almost in parallel with Chromium’s latest versions. Source: Electron

To keep up with Chromium releases, those two versions were released on a 2-3 month timeline each. The team will continue this pattern for Electron 5.0 and beyond, so for now developers can expect a major Electron release approximately every quarter. Read the timeline shared by the Electron team on their official website.

Related:
  • Flutter challenges Electron, soon to release a desktop client to accelerate mobile development
  • Electron 3.0.0 releases with experimental textfield, and button APIs
  • Electron Fiddle: A ‘code playground’ for experimenting with cross-platform native apps


Mandrill email API outage unresolved, leaving users frustrated

Savia Lobo
06 Feb 2019
2 min read
At the beginning of this week, Mandrill, a transactional email API for MailChimp users, experienced an outage in which users were able to send but unable to receive emails. The Mandrill team also tweeted that they were seeing ongoing errors with scheduled mail and webhooks and would resolve the issue soon. https://twitter.com/mandrillapp/status/1092611488982945793

Sebastian Lauwers, the VP of Engineering at Dixa, a customer service software company, tweeted that the issue was taking too long to resolve and asked why Mandrill needed nearly 23 hours to sort it out. https://twitter.com/teotwaki/status/1092624972252618754

Today, a user with the username GuyPostington posted on Hacker News an email received from Mandrill. The email explains the reason for the outage and how the team will address it. Mandrill uses a sharded Postgres setup as one of its main datastores. According to the email, “On Sunday, February 3, at 10:30 pm EST, 1 of our 5 physical Postgres instances saw a significant spike in writes. The spike in writes triggered a Transaction ID Wraparound issue. When this occurs, database activity is completely halted. The database sets itself in read-only mode until offline maintenance (known as vacuuming) can occur.” They have also tweeted the same.

They further mentioned that because the database is large, the vacuum process takes a significant amount of time and resources, and there is no clear way to track its progress. Addressing the issue, the team writes, “We don’t have an estimated time for when the vacuum process and cleanup work will be complete. While we have a parallel set of tasks going to try to get the database back in working order, these efforts are also slow and difficult with a database of this size. We’re trying everything we can to finish this process as quickly as possible, but this could take several days, or longer.” The email also states that once the outage is resolved, the company plans to offer refunds to all affected users. To know more about this news, visit Mandrill’s tweet thread.

Related:
  • Microsoft Cloud services’ DNS outage results in deleting several Microsoft Azure database records
  • Internet Outage or Internet Manipulation? New America lists government interference, DDoS attacks as top reasons for Internet Outages across the world
  • Outage in the Microsoft 365 and Gmail made users unable to log into their accounts
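For readers unfamiliar with the failure mode: Postgres transaction IDs (XIDs) are 32-bit and circular, so any given XID sees roughly half the ~4 billion XID space as “in the past” and half as “in the future”. If vacuum does not freeze old rows before a table’s oldest XID is about 2^31 transactions behind the current one, old data would suddenly appear to be in the future, which is why Postgres halts writes first. The circular comparison can be sketched as:

```python
# Sketch of PostgreSQL's circular 32-bit transaction ID comparison.
# XID `a` is "before" XID `b` if the difference, viewed as a signed
# 32-bit value, is negative -- so each XID keeps at most ~2**31 of history.
XID_SPACE = 2**32
WRAP_LIMIT = 2**31  # ~2 billion: the most "history" an XID can see

def xid_precedes(a: int, b: int) -> bool:
    diff = (a - b) % XID_SPACE
    return diff >= WRAP_LIMIT  # i.e. negative as a signed 32-bit value

print(xid_precedes(100, 200))          # True: 100 is older than 200
# After ~2 billion transactions, an unfrozen old XID "wraps" into the future:
old_xid, current = 100, 100 + WRAP_LIMIT + 1
print(xid_precedes(old_xid, current))  # False: the old XID now looks newer!
```

This is why the fix is an offline vacuum: it freezes old rows so their XIDs can never wrap into the future, and on a very large, write-hot shard that pass can take days.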

Red Hat announces CodeReady Workspaces, the first Kubernetes-Native IDE for easy collaboration among developers

Natasha Mathur
06 Feb 2019
2 min read
Red Hat has come out with a new integrated development environment (IDE) called CodeReady Workspaces. The new IDE, announced yesterday, is a Kubernetes-native, browser-based environment that enables easier collaboration within development teams. CodeReady Workspaces is based on the open source Eclipse Che IDE project and has been optimized for Red Hat OpenShift and Red Hat Enterprise Linux. It offers enterprise development teams a shareable developer environment comprising the tools and dependencies required to code, build, test, run, and debug container-based applications.

It is also the first Kubernetes-native IDE on the market: it runs inside a Kubernetes cluster and manages the developer’s code and dependencies inside OpenShift pods and containers. Developers do not need to be Kubernetes or OpenShift experts to use it. Additionally, Red Hat CodeReady Workspaces comes with a sharing feature called Factories: a Factory is a template containing the source code location, runtime, tooling configuration, and commands needed for a project, which lets development teams spin up the new Kubernetes-native developer environment in a few minutes. Team members can access their own or shared workspaces on any device with a browser, on any IDE and operating system (OS).

According to the Red Hat team, CodeReady Workspaces is an ideal platform for DevOps-based organizations, allowing IT or development teams to manage workspaces at scale and effectively control system performance, security features, and functionality. CodeReady Workspaces allows developers to:
  • Integrate their preferred version control (public and private repositories).
  • Control workspace permissions and resourcing.
  • Use Lightweight Directory Access Protocol (LDAP) or Active Directory (AD) authentication for single sign-on.

“Red Hat CodeReady Workspaces offers enterprise development teams a collaborative and scalable platform that can enable developers to more efficiently and effectively deliver new applications for Kubernetes and collaborate on container-native applications”, said Brad Micklea, senior director, Developer Experience and Programs, Red Hat. Red Hat CodeReady Workspaces is free with an OpenShift subscription; you can download it by joining the Red Hat Developer Program. For more information, check out the official Red Hat CodeReady Workspaces blog.

Related:
  • Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)
  • Red Hat acquires Israeli multi-cloud storage software company, NooBaa
  • Red Hat announces full support for Clang/LLVM, Go, and Rust


TensorFlow.js: Architecture and applications

Bhagyashree R
05 Feb 2019
4 min read
In a paper published last month, Google developers explained the design, API, and implementation of TensorFlow.js, the JavaScript implementation of TensorFlow. TensorFlow.js was first introduced at the TensorFlow Dev Summit 2018. It is essentially the successor of deeplearn.js, which was released in August 2017 and is now named TensorFlow.js Core.

Google’s motivation behind creating TensorFlow.js was to put machine learning in the hands of web developers, who generally do not have much experience with machine learning. It also aims to allow experienced ML users and teaching enthusiasts to easily migrate their work to JS.

The TensorFlow.js architecture

TensorFlow.js, as the name suggests, is based on TensorFlow, with a few exceptions specific to the JS environment. The library comes with the following two sets of APIs:

- The Ops API facilitates lower-level linear algebra operations such as matrix multiplication, tensor addition, and so on.
- The Layers API, similar to the Keras API, provides developers with high-level model building blocks and best practices, with an emphasis on neural networks.

Source: TensorFlow.js

TensorFlow.js backends

In order to support device-specific kernel implementations, TensorFlow.js has a concept of backends. It currently supports three backends: the browser (a plain JavaScript CPU implementation), WebGL, and Node.js. Two rising web standards, WebAssembly and WebGPU, will also be supported as backends in the future.

To utilize the GPU for fast parallelized computations, TensorFlow.js relies on WebGL, a cross-platform web standard that provides low-level 3D graphics APIs. Among the three TensorFlow.js backends, the WebGL backend has the highest complexity.

With the introduction of Node.js and event-driven programming, the use of JS in server-side applications has grown over time. Server-side JS has full access to the filesystem, native operating system kernel, and existing C and C++ libraries.
In order to support the server-side use cases of machine learning in JavaScript, TensorFlow.js comes with a Node.js backend that binds to the official TensorFlow C API using the N-API. As a fallback, TensorFlow.js provides a slower CPU implementation in plain JS. This fallback can run in any execution environment and is used automatically when the environment has no access to WebGL or the TensorFlow binary.

Current applications of TensorFlow.js

Since its launch, TensorFlow.js has seen applications in various domains. Here are some of the interesting examples the paper lists:

Gestural interfaces

TensorFlow.js is being used in applications that take gestural inputs with the help of a webcam. Developers are using the library to build applications that translate sign language to speech, enable individuals with limited motor ability to control a web browser with their face, and perform real-time facial recognition and pose detection.

Research dissemination

The library has enabled ML researchers to make their algorithms more accessible to others. For instance, the Magenta.js library, developed by the Magenta team, provides in-browser access to generative music models. Porting their work to the web with TensorFlow.js has increased its visibility with their target audience, namely musicians.

Desktop and production applications

In addition to web development, JavaScript has been used to develop desktop and production applications. Node Clinic, an open source performance profiling tool, recently integrated a TensorFlow.js model to separate CPU usage spikes caused by the user from those caused by Node.js internals. Another example is Mood.gg Desktop, a desktop application powered by Electron, a popular JavaScript framework for writing cross-platform desktop apps. With the help of TensorFlow.js, Mood.gg detects which character the user is playing in the game Overwatch by looking at the user's screen.
It then plays a custom soundtrack from a music streaming site that matches the playing style of the in-game character.

Read the paper, Tensorflow.js: Machine Learning for the Web and Beyond, for more details.

TensorFlow.js 0.11.1 releases!

Emoji Scavenger Hunt showcases TensorFlow.js

16 JavaScript frameworks developers should learn in 2019
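The backend-selection behavior described above — prefer an accelerated backend when its environment check passes, and fall back to the plain-JS CPU implementation otherwise — can be illustrated with a small sketch. Note that this is not TensorFlow.js's actual internal code: the `registerBackend` and `chooseBackend` names and the priority scheme are invented here for illustration.

```javascript
// Illustrative sketch of backend selection with a CPU fallback.
// Names and priorities are hypothetical, not TensorFlow.js internals.
const backends = new Map();

function registerBackend(name, priority, isAvailable) {
  backends.set(name, { name, priority, isAvailable });
}

// Pick the highest-priority backend whose availability check passes.
function chooseBackend() {
  const candidates = [...backends.values()]
    .filter((b) => b.isAvailable())
    .sort((a, b) => b.priority - a.priority);
  return candidates.length ? candidates[0].name : null;
}

// The plain-JS CPU backend can run anywhere, so it acts as the fallback.
registerBackend('cpu', 0, () => true);
// WebGL is only usable where the browser exposes a WebGL context.
registerBackend('webgl', 2, () => typeof WebGLRenderingContext !== 'undefined');
// The Node.js backend requires the native TensorFlow binding (absent here).
registerBackend('tensorflow', 3, () => false);

console.log(chooseBackend()); // 'cpu' in a plain Node.js process without WebGL
```

The real library applies the same idea automatically: code written against the Ops or Layers API runs unchanged whether the environment ends up on WebGL, the Node.js binding, or the slow CPU path.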


Transformer-XL: A Google architecture with 80% longer dependency than RNNs

Natasha Mathur
05 Feb 2019
3 min read
A group of researchers from Google AI and Carnegie Mellon University announced the details of their newly proposed architecture, called Transformer-XL (extra long), yesterday. It is aimed at improving natural language understanding beyond a fixed-length context — a long text sequence truncated into fixed-length segments of a few hundred characters — using self-attention.

Transformer-XL achieves this with two key techniques: a segment-level recurrence mechanism and a relative positional encoding scheme. Let's have a look at these key techniques in detail.

Segment-level recurrence

The recurrence mechanism helps address the limitations of using a fixed-length context. During the training process, the hidden state sequences computed for the previous segment are fixed and cached. These are then reused as an extended context once the model starts processing the next segment.

Segment-level recurrence

This connection increases the largest possible dependency length by N times (N being the depth of the network), as contextual information can now flow across segment boundaries. The recurrence mechanism also resolves the context-fragmentation issue. Moreover, with the recurrence mechanism applied to every two consecutive segments of a corpus, a segment-level recurrence is created in the hidden states, which in turn lets the effective context be utilized beyond the two segments.

Apart from enabling extra-long context and resolving the fragmentation issue, the recurrence mechanism also makes evaluation significantly faster.

Relative positional encodings

Although the segment-level recurrence technique is effective, reusing the hidden states poses a technical challenge: the positional information must be kept coherent while the states are reused.
Simply applying the original positional encodings does not work in this case, as they are not coherent when the previous segments are reused. This is where the relative positional encoding scheme comes into the picture to make the recurrence mechanism possible. The relative positional encodings use fixed embeddings with learnable transformations instead of learnable embeddings, making them more generalizable to longer sequences at test time. The core idea behind the technique is to encode only the relative positional information in the hidden states. “Our formulation uses fixed embeddings with learnable transformations instead of learnable embeddings and thus is more generalizable to longer sequences at test time”, state the researchers.

With both approaches combined, Transformer-XL has a much longer effective context and is able to process the elements in a new segment without any recomputation.

Results

Transformer-XL obtains new results on a variety of major language modeling (LM) benchmarks. It is the first self-attention model to achieve better results than RNNs on both character-level and word-level language modeling, and it can model longer-term dependency than both RNNs and the vanilla Transformer. Transformer-XL offers the following three benefits:

- Its dependency is about 80% longer than RNNs and 450% longer than vanilla Transformers.
- It is up to 1,800+ times faster than a vanilla Transformer during evaluation of language modeling tasks, as no re-computation is needed.
- It has better performance in perplexity on long sequences due to long-term dependency modeling, and on short sequences by resolving the context fragmentation problem.

For more information, check out the official Transformer-XL research paper.
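At a very high level, the segment-level recurrence idea — cache the previous segment's hidden states and reuse them as fixed context for the next segment — can be sketched as follows. This is a toy illustration only: plain arrays stand in for tensors, and `encode` is an invented stand-in for a real Transformer layer, not the paper's implementation.

```javascript
// Toy sketch of segment-level recurrence: hidden states from the
// previous segment are cached and reused as extended context for the
// next segment, so information flows across segment boundaries.
function encode(token, context) {
  // Hypothetical "hidden state": the token plus how much cached context it saw.
  return { token, contextLength: context.length };
}

function processCorpus(tokens, segmentLength) {
  let cached = [];            // fixed, cached states from the previous segment
  const allStates = [];
  for (let i = 0; i < tokens.length; i += segmentLength) {
    const segment = tokens.slice(i, i + segmentLength);
    // Each new segment is processed with the cached states as extra context.
    const states = segment.map((t) => encode(t, cached));
    allStates.push(...states);
    cached = states;          // cache for the next segment (no recomputation)
  }
  return allStates;
}

const states = processCorpus(['a', 'b', 'c', 'd'], 2);
// The second segment ('c', 'd') sees the 2 cached states from ('a', 'b').
console.log(states[2].contextLength); // 2
```

In the real architecture this reuse happens per layer, which is why the maximum dependency length grows with the network depth N rather than staying capped at the segment length.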
Researchers build a deep neural network that can detect and classify arrhythmias with cardiologist-level accuracy

Researchers introduce a machine learning model where the learning cannot be proved

Researchers design ‘AnonPrint’ for safer QR-code mobile payment: ACSC 2018 Conference

Mozilla blocks auto playing audible media in Firefox 66 for desktop and Firefox for Android

Amrata Joshi
05 Feb 2019
3 min read
When clicking on a link or opening a new browser tab, an audible video or audio clip sometimes starts playing automatically, which is a distraction for many users. Mozilla is trying to solve this problem by introducing an autoplay-blocking feature in Firefox 66 for desktop and Android. Firefox 66, expected to roll out on 19th March 2019, will block audible audio and video by default.

Lately, Mozilla has been working towards blocking all auto-playing content. Last year, Mozilla announced that Firefox will no longer allow auto-playing audio in order to cut down on annoying advertisements. Microsoft's Edge, Google's Chrome, and Apple's Safari browsers have also taken steps to limit auto-playing media.

Firefox 66 will allow a site to play audio or video aloud via the HTMLMediaElement API only once the web page has had some kind of user interaction (such as a mouse click on the play button) to initiate the audio. Any playback that happens before the user has interacted with the page via a mouse click, printable key press, or touch event will be counted as autoplay and will be blocked if it is audible.

Firefox expresses a blocked play() call to JavaScript by rejecting HTMLMediaElement.play() with a NotAllowedError, which is how most browsers express a block. Muted autoplay is still allowed: a script can set the “muted” attribute on the HTMLMediaElement to true, and autoplay will work. The existing block-autoplay implementation on Android and desktop will be replaced with this new feature.

According to Mozilla's official blog post, “In general, the advice for web authors when calling HTMLMediaElement.play(), is to not assume that calls to play() will always succeed, and to always handle the promise returned by play() being rejected.”

Users can still opt out of this blocker for sites whose media they don't mind playing by default.
An icon will appear in the URL bar to indicate that auto-playing media has been blocked, and clicking on it will bring up a menu that allows users to change the setting. To avoid having audible playback blocked, website owners should only play media inside a click or keyboard event handler, or, on mobile, in a touchend event.

Firefox 66 will also automatically allow autoplaying video on sites to which the user has granted access to their camera and microphone. Since these sites are typically for video conferencing, it makes sense to let them work uninterrupted.

Mozilla is also working on blocking autoplay for Web Audio content, but has not yet finalized the implementation. Autoplay blocking for Web Audio content is expected to ship sometime this year.

Mozilla releases Firefox 65 with support for AV1, enhanced tracking protection, and more!

Firefox now comes with a Facebook Container extension to prevent Facebook from tracking user’s web activity

Mozilla disables the by default Adobe Flash plugin support in Firefox Nightly 69
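Mozilla's advice above — never assume play() succeeds, and always handle the rejected promise — can be sketched like this. The `tryAutoplay` helper name and the fall-back-to-muted strategy are illustrative choices on our part, not code mandated by Mozilla.

```javascript
// Sketch of handling a blocked autoplay: try to play audibly, and if the
// browser rejects with NotAllowedError, fall back to muted playback.
// `tryAutoplay` is a hypothetical helper, not part of any web standard.
async function tryAutoplay(media) {
  try {
    await media.play();            // may reject if autoplay is blocked
    return 'audible';
  } catch (err) {
    if (err.name !== 'NotAllowedError') throw err; // some other failure
    media.muted = true;            // muted autoplay is still allowed
    await media.play();
    return 'muted';
  }
}
```

In a real page you would call `tryAutoplay(document.querySelector('video'))` and, when it resolves to 'muted', surface an unmute button so the user gesture can restore audible playback.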


Slack confidentially files to go public

Amrata Joshi
05 Feb 2019
3 min read
Yesterday, Slack Technologies confidentially filed with the Securities and Exchange Commission to list its shares publicly in the U.S. The company plans to go with a direct listing in the stock market, and might get into a race with Lyft, Uber, and Airbnb to become the next major company after Spotify to use this non-traditional method in place of an Initial Public Offering (IPO).

Last year, Spotify decided to sell its shares directly to regular people rather than to a pre-chosen group of its bankers' friends, in a move known as a direct listing. A company that opts for a direct listing doesn't create or sell any new stock, and therefore doesn't raise any money; instead, current shareholders sell their preexisting shares.

Slack had about $900 million in cash on its balance sheet as of October 2018, according to The Information. Last December, Slack hired Goldman Sachs to lead its IPO as an underwriter and was seeking a valuation of more than $10 billion, as reported by Reuters. According to a report by Crunchbase, Slack has raised about $1 billion so far.

Global growth concerns and U.S.-China trade issues have had an impact on the equity markets. Many companies have pulled IPOs from the markets, citing “unfavorable economic conditions”, with the number rising since the U.S. government shutdown. It would be interesting to see what step the company takes next.

According to a few users, this move will be beneficial for Slack. One of the comments on HackerNews reads, “Nothing is wrong with the market: Slack may have decided that this is the best way for them to create liquidity. There is also a cap (2000) on the number of shareholders a company can have before they have to abide by what amounts to the same reporting requirements as a publicly traded company.
Slack also get the advantage of the usual market pop of acquiring companies share prices that usually amounts to a significant % of the cash value of the transaction.”

According to a few others, the company will have huge leverage with stock compensation and would be able to buy other companies because of its access to funding.

To know more about this news, check out the official press release.

Slack has terminated the accounts of some Iranian users, citing U.S. sanctions as the reason

Airtable, a Slack-like coding platform for non-techies, raises $100 million in funding

Atlassian sells Hipchat IP to Slack