
Tech News

3711 Articles

Google releases Magenta Studio beta, an open source Python machine learning library for music artists

Melisha Dsouza
14 Nov 2018
3 min read
On 11th November, the Google Brain team released Magenta Studio in beta, a suite of free music-making tools built on their machine learning models. It is a collection of music plugins built on Magenta's open source tools and models. These tools are available both as standalone Electron applications and as plugins for Ableton Live.

What is Project Magenta?

Magenta is a research project started by researchers and engineers from the Google Brain team, with significant contributions from many other stakeholders. The project explores the role of machine learning in the process of creating art and music. It primarily involves developing new deep learning and reinforcement learning algorithms to generate songs, images, drawings, and other materials. It also explores the possibility of building smart tools and interfaces that allow artists and musicians to extend their processes using these models.

Magenta is powered by TensorFlow and is distributed as an open source Python library. This library allows users to manipulate music and image data, which can then be used to train machine learning models and to generate new content from those models. The project aims to demonstrate that machine learning can be utilized to enable and enhance the creative potential of all people.

If Magenta Studio is used via Ableton, the Ableton Live plugin reads and writes clips from Ableton's Session View. If a user chooses to run the studio as a standalone application, it reads and writes files from the user's file system without requiring Ableton.

Some of the demos include:

#1 Piano Scribe
Many of the generative models in Magenta.js require the input to be a symbolic representation like Musical Instrument Digital Interface (MIDI). Magenta now converts raw audio to MIDI using Onsets and Frames, a neural network trained for polyphonic piano transcription. This means audio alone is enough to obtain MIDI output in the browser.

#2 Beat Blender
Beat Blender was built by Google Creative Lab using MusicVAE. Users can generate two-dimensional palettes of drum beats and draw paths through the latent space to create evolving beats.

#3 Tenori-off
Users can utilize Magenta.js to generate drum patterns when they hit the "Improvise" button. It is essentially a machine-learning take on an electronic sequencer.

#4 NSynth Super
This is a machine learning algorithm that uses a deep neural network to learn the characteristics of sounds and then create a completely new sound based on those characteristics. NSynth synthesizes an entirely new sound using the acoustic qualities of the original sounds; for instance, users can get a sound that's part flute and part sitar all at once.

You can head over to the Magenta blog for more exciting demos. Alternatively, head over to magenta.tensorflow.org to read more about this announcement.

Worldwide Outage: YouTube, Facebook, and Google Cloud go down affecting thousands of users
Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
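To give a flavour of the Magenta Python library described above, here is a minimal sketch of building a short melody with Magenta's NoteSequence protobuf and writing it out as a MIDI file. The module paths follow the 2018-era magenta package and may differ in newer releases:

```python
# A minimal sketch using Magenta's NoteSequence proto (2018-era module
# paths; later versions moved these into the note_seq package).
import magenta.music as mm
from magenta.protobuf import music_pb2

sequence = music_pb2.NoteSequence()

# Add a short C-major arpeggio (MIDI pitches 60, 64, 67), half a
# second per note.
for i, pitch in enumerate([60, 64, 67]):
    sequence.notes.add(
        pitch=pitch,
        start_time=i * 0.5,       # seconds
        end_time=(i + 1) * 0.5,
        velocity=80,
    )
sequence.total_time = 1.5
sequence.tempos.add(qpm=120)

# Serialize to a standard MIDI file that a DAW like Ableton can import.
mm.sequence_proto_to_midi_file(sequence, 'arpeggio.mid')
```

The same NoteSequence objects are what Magenta's models consume and emit, which is what lets tools like Magenta Studio move material between model output and Ableton clips.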


Alphabet’s Waymo to launch the world’s first commercial self-driving cars next month

Prasad Ramesh
14 Nov 2018
2 min read
Waymo plans to launch the world's first commercial driverless car service by December. What first began at Google was rebranded in 2016 and brought directly under the parent company, Alphabet. Waymo is already on the road for a small group of 400 families in the Phoenix area and is now expanding with a license in California. They plan to continue expanding, acquiring licenses in new areas as they go. At the end of last month, Waymo acquired a license from the California Department of Motor Vehicles (DMV) to run driverless cars on public roads. Businesses are expected to be the main customers.

Waymo gets a permit from California DMV

The permit allows Waymo to drive both day and night, with a speed limit of 65 mph. They state in a blog post: "Our vehicles can safely handle fog and light rain." The company has collected data from millions of miles driven over the years to train the artificial intelligence system in use. When faced with a situation it does not understand, a self-driving car will wait until it knows how to proceed. Waymo also has human fleet and rider support to solve any issues the self-driving car cannot. Waymo has deals with companies like Fiat and Jaguar to make thousands of vehicles driverless.

Waymo's systems have driven millions of real and billions of simulated miles

In addition to the 10 million real miles, the Waymo system was also subjected to 7 billion simulated miles to make the self-driving tech an experienced driver. There will also be backup drivers in some cars to take over if necessary, so that riders have peace of mind. John Krafcik, Waymo's CEO, told The Wall Street Journal on Tuesday that this service will be available to consumers as well as businesses. Notably, companies like Walmart, Avis Budget Group Inc., and AutoNation Inc. are also interested in this service and are willing to pay for their customers' rides.

For more details, read the Waymo blog post.

Lyft acquires computer vision startup Blue Vision Labs, in a bid to win the self driving car race
This self-driving car can drive in its imagination using deep reinforcement learning
Tesla is building its own AI hardware for self-driving car


Introducing Firefox Sync centered around user privacy

Melisha Dsouza
14 Nov 2018
4 min read
“Ensure the Internet is a global public resource… where individuals can shape their own experience and are empowered, safe and independent.” -Team Mozilla

Yesterday, Mozilla explained the idea behind Firefox Sync and how the tool was built with user privacy in mind. Because sharing data with a provider is the norm, the team found it important to highlight the privacy aspects of Firefox Sync.

What is Firefox Sync?

Firefox Sync lets users share their bookmarks, browsing history, passwords, and other browser data between different devices, and send tabs from one device to another. This feature redefines how users interact with the web. Users can log on to Firefox with Firefox Sync, using the same account across multiple devices, and can even access the same sessions when swapping devices. With one easy sign-in, Firefox Sync gives users access to their bookmarks, tabs, and passwords. Sync allows users logged on from one device to be simultaneously logged on to other devices, which means that tasks started on a user's laptop in the morning can be picked up on their phone later in the day.

Why is Firefox Sync secure?

By default, Firefox Sync protects all synced data so Mozilla can't read it. When a user signs up for Sync with a strong passphrase, their data is protected both from attackers and from Mozilla. Mozilla encrypts all of a user's synced data so that it is entirely unreadable without the key used to encrypt it. Ideally, even a service provider should never receive a user's key, and Firefox takes care of this when a user signs into their Firefox account with a username and passphrase.

Traditionally, the username and passphrase are sent to the server, where the passphrase is hashed and compared with a stored hash; if a match is found, the server sends the user their data. While using Firefox, a user never sends over their passphrase. Mozilla transforms a user's passphrase on their computer into two different, unrelated values, such that the two values are independent of each other. Mozilla sends an authentication token, derived from the passphrase, to the server, which serves as the password-equivalent. This means that the encryption key derived from the passphrase never leaves a user's computer.

In more technical terms, 1,000 rounds of PBKDF2 are used to derive the authentication token from a user's passphrase. On the server side, this token is hashed with scrypt so that the database of authentication tokens is even more difficult to crack. The passphrase is also derived into an encryption key using the same 1,000 rounds of PBKDF2. It is domain-separated from the previously generated authentication token by using HKDF with separate info values. This key is used to unwrap an encryption key (obtained during setup, and which Mozilla never sees unwrapped), and that encryption key is used to protect the user's data. The key is used to encrypt user data using AES-256 in CBC mode, protected with an HMAC.

Source: Mozilla Hacks

How are people reacting to this feature?

Sync has been well received by customers. A user on Hacker News commented how this feature makes "Firefox important". Sync has also been compared to Google Chrome, since Chrome's sync feature collects its users' complete browsing histories. One user commented that Mozilla's privacy tools will make him "choose [Firefox] over Chrome". And since this approach is relatively simple to implement, users are also exploring the possibility of "implement[ing] a similar encryption system as a proof of concept".
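To make the derivation described under "Why is Firefox Sync secure?" concrete, here is a rough Python sketch of the same pattern: stretch the passphrase with 1,000 rounds of PBKDF2, then use HKDF with distinct info labels to derive two independent values. This is an illustration, not Mozilla's code; the labels and salt are invented for the example:

```python
# A rough sketch (not Mozilla's code) of the derivation pattern above:
# stretch the passphrase with 1,000 PBKDF2 rounds, then derive two
# unrelated values from it via HKDF with different `info` labels.
import hashlib
import hmac

def hkdf_sha256(key: bytes, info: bytes, length: int = 32,
                salt: bytes = b'\x00' * 32) -> bytes:
    """Single-block HKDF-SHA256 (RFC 5869 extract-then-expand)."""
    prk = hmac.new(salt, key, hashlib.sha256).digest()
    return hmac.new(prk, info + b'\x01', hashlib.sha256).digest()[:length]

def derive(passphrase: bytes, salt: bytes):
    # Step 1: stretch the passphrase (the 1,000 PBKDF2 rounds quoted above).
    stretched = hashlib.pbkdf2_hmac('sha256', passphrase, salt, 1000)
    # Step 2: domain-separate with different info values so the two
    # outputs are computationally unrelated.
    auth_token = hkdf_sha256(stretched, b'example.com/authToken')
    unwrap_key = hkdf_sha256(stretched, b'example.com/unwrapKey')
    return auth_token, unwrap_key

auth_token, unwrap_key = derive(b'correct horse battery staple',
                                b'user@example.com')
# Only auth_token ever crosses the wire (the server stores an scrypt hash
# of it); unwrap_key stays on the client and unwraps the key that protects
# synced data with AES-256-CBC plus an HMAC.
```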
In a time when respecting the privacy of a user is so unusual, Mozilla sure has caught our attention with its approach of being more "user privacy-centric". You can head over to Mozilla's blog to learn about other approaches to building a sync feature for a browser and how Sync protects user data.

Mozilla pledges to match donations to Tor crowdfunding campaign up to $500,000
Mozilla shares how AV1, the new open source royalty-free video codec, works
Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs


Facebook shares update on last week’s takedowns of accounts involved in “inauthentic behavior”

Bhagyashree R
14 Nov 2018
3 min read
Yesterday, Facebook shared the findings and takedowns from last week's investigation into coordinated inauthentic behavior. In order to accomplish these takedowns, they worked closely with the government, the security community, and other tech companies. Coordinated inauthentic behavior refers to people or organizations working together to create networks of accounts and Pages to mislead others about who they are or what they're doing.

What are the findings of Facebook's investigation?

On November 4th, just a few days before the US mid-term elections, Facebook was informed by US law enforcement about online activity that they believed was linked to foreign entities. Facebook investigated further and found that around 30 Facebook accounts and 84 Instagram accounts were potentially engaged in coordinated inauthentic behavior. Facebook said in its Election Update that most of the Facebook Pages associated with these accounts were in French or Russian: “Almost all the Facebook Pages associated with these accounts appear to be in the French or Russian languages, while the Instagram accounts seem to have mostly been in English — some were focused on celebrities, others political debate.”

Combined with the takedowns of last Monday, they have removed a total of 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior. These accounts were predominantly created after mid-2017 and had a substantial number of followers: “We found a total of about 1.25 million people who followed at least one of these Instagram accounts, with just over 600,000 located in the US. By comparison, the recent set of accounts that we removed which originated from Iran had around 1 million followers.”

On November 6, a website claiming to be associated with the Internet Research Agency, a Russia-based troll farm, published a list of Instagram accounts they said they'd created. Facebook said that it has now blocked these accounts.

To give background on how it is mitigating misuse of the platform, Facebook mentioned that it partners with external parties like governments and security experts. These partnerships have helped Facebook, especially in the lead-up to last week's midterm elections. Nathaniel Gleicher, the Head of Cybersecurity Policy, said in his post: “And while we can remove accounts and Pages and prohibit bad actors from using Facebook, governments have additional tools to deter or punish abuse. That’s why we’re actively engaged with the Department of Homeland Security, the FBI, including their Foreign Influence Task Force, Secretaries of State across the US — as well as other government and law enforcement agencies around the world — on our efforts to detect and stop information operations, including those that target elections.”

Though removing misleading Pages and accounts is a step in the right direction towards making the platform free from fake news and preventing its involvement in elections, it could also result in the takedown of legitimate accounts. “Facebook took down the pages of a lot of legit people I know and follow,” said one Hacker News user.

Head over to Facebook's newsroom to stay updated on Facebook's activities to mitigate its misuse.
Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media
Following Google, Facebook changes its forced arbitration policy for sexual harassment claims
A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users’ private data up for sale, reports BBC News


Security vulnerabilities identified in Washington, Georgia, and North Carolina’s voting systems

Savia Lobo
13 Nov 2018
4 min read
Security gaps have been identified in both Washington State's and North Carolina's voter registration systems. Spotted by cybersecurity experts, these vulnerabilities could potentially have been exploited to interfere with citizens' eligibility to cast ballots in last week's elections. Fortunately, it seems that hasn't happened. Officials in both Washington and North Carolina expressed confidence they would spot any widespread tampering with voter registration records.

According to The Seattle Times, “cyber experts said Washington appears to have failed to plug all the holes after the U.S. Department of Homeland Security warned last year that Russian cyber operatives had downloaded voter records from Illinois’ database in advance of the 2016 presidential election and attempted to do so in 20 other states.”

Washington Secretary of State Kim Wyman assures voters systems are secure

Washington Secretary of State Kim Wyman was keen to stress that Washington's electoral infrastructure is secure. In a statement on her website, she said “voters can rest assured that Washington’s election system is secure.”

It was only in May that the Senate Intelligence Committee alleged that in “a small number of states,” cyberattackers affiliated with the Russian government “were in a position to” alter or delete voter registration information during the 2016 election. As part of that report, the Committee urged “federal grant funds to improve cybersecurity by hiring additional Information Technology staff, updating software, and contracting vendors to provide cybersecurity services.”

However, cybersecurity experts have been quick to pick up on vulnerabilities that still haven't been tackled. Susan Greenhalgh, policy director for the National Election Defense Coalition, said “the gaping vulnerability found in Georgia should be sending shock waves, not just in the Georgia Secretary of State’s office, but in all the other states that are using the same technology. The vendor left a door wide open that allows an attacker, anywhere in the world, to execute a voter suppression operation using election technology.”

The vendor that installed Georgia's computer programming has been identified as PCC Technologies, at the time a Connecticut-based firm. Cyber experts examined four states' registration sites for McClatchy, including North Carolina and Washington, because PCC had listed them alongside 15 other states for which it had performed work. Officials in both Washington and North Carolina said PCC did not program their voter registration databases, but the cyber experts said they could still see vulnerabilities. According to The Seattle Times, “[cybersecurity experts] said hackers could get around authentication requirements in the voter registration system for Washington’s statewide vote-by-mail operation.” This would mean that “if data were deleted, the affected voters would not be mailed ballots, creating significant challenges, especially if the voter failed to act before Election Day.”

Georgia's online registration system is out of date, cybersecurity expert claims

Harri Hursti, a New York-based cybersecurity expert who monitored Georgia's election on Tuesday, said the design of its online registration system was acceptable 15 years ago. But today, he said, it would violate “every single manual” because it exposes “critical information” to any viewer.

Erich Ebel, a spokesperson for Kim Wyman, said “the state has a very robust election security protocol, both physical and electronic. Our firewalls are state-of-the-art, and we have a number of other measures in place to identify, block and report suspicious activity”. Even so, “Bernhard and a prominent cyber expert who evaluated Washington’s security on condition of anonymity said there’s still a way for a bad actor to manipulate the system,” according to The Seattle Times. A group of computer enthusiasts created a website named Highprogrammer.com, which can easily derive driver's license numbers for residents of Washington and a number of other states, showing how easily such systems can be breached.

Patrick Gannon, a spokesman for North Carolina's elections board, also acknowledged that a North Carolina law makes the state's voter registration data widely available. This includes personal information such as ages and addresses, and could allow anyone to pluck names off the list, fill out a form, and mail fake address changes to state or county officials.


Version 1.29 of Visual Studio Code is now available

Amrata Joshi
13 Nov 2018
3 min read
Visual Studio Code 1.29 was released yesterday as the October installment of Microsoft's planned monthly updates. This update to the code editor includes multiline search and improved support for macOS.

Features of Visual Studio Code 1.29

Multiline search
Visual Studio Code now supports multiline search. A regex search executes in multiline mode only if it contains a \n literal. The search view pops up a hint next to each multiline match. Multiline search is implemented using the ripgrep tool.

macOS full-screen support
By default, Visual Studio Code uses macOS native full screen. Setting window.nativeFullScreen to false makes Visual Studio Code 1.29 enter full-screen mode without creating a macOS space on the desktop.

Highlight modified tabs
Visual Studio Code 1.29 comes with a new setting, workbench.editor.highlightModifiedTabs. Whenever an editor has unsaved changes, this setting displays a thick border at the top of the editor tab, making it easier to find files that need to be saved. The color of the border can be customized.

File and folder icons in IntelliSense
The IntelliSense widget has been updated. It shows file and folder icons for file completions based on the File Icon theme. This provides a unique look and helps in quickly identifying different file types.

Format Selection
Visual Studio Code 1.29 speeds up small formatting operations: without an editor selection, the Format Selection command will now format the current line.

Show error codes
The editor now shows the error code of a problem if an error code is defined. The error code appears at the end of the line in square brackets.

Normalized extension samples
The Visual Studio Code extension samples at vscode-extension-samples have been updated in this release for consistency. Each extension sample includes a uniform coding style and structure and a README that explains the sample's functionality with a short animation, along with a listing of the vscode API or Contribution Points used in each sample.

Start debugging with a stop on entry
The team has introduced a command for Node.js debugging, Debug: Start Debugging and Stop On Entry (extension.node-debug.startWithStopOnEntry), which starts debugging and immediately stops on the entry of your program.

Clear terminal before executing a task
A new property called clear has been added to the task presentation configuration in this release. If the clear property is set to true, the terminal is cleared before the task is run.

Major Bug Fixes
Previously, the startDebugging method in Visual Studio Code would return the value ‘true’ even when the build failed; this has been fixed in this release. The Settings UI now remembers its search on reloading, which it previously did not. And it is now possible to cancel a debug session while it is initializing, which wasn't possible in earlier releases.

Read more on this news on the Visual Studio Code website.

Visual Studio code July 2018 release, version 1.26 is out!
Unit Testing in .NET Core with Visual Studio 2017 for better code quality
Neuron: An all-inclusive data science extension for Visual Studio
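For illustration, the two editor settings mentioned above would be combined in a user settings.json like this (values are illustrative; VS Code's settings files accept comments):

```jsonc
// settings.json (illustrative values)
{
  // Show a thick border on tabs whose editors have unsaved changes
  "workbench.editor.highlightModifiedTabs": true,

  // Enter full screen without creating a new macOS space
  "window.nativeFullScreen": false
}
```

Likewise, a task can opt into clearing the terminal before each run via its presentation settings (the "build" task here is a made-up example):

```jsonc
// tasks.json (hypothetical build task)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "type": "shell",
      "command": "make",
      // New in 1.29: wipe the terminal before the task runs
      "presentation": { "clear": true }
    }
  ]
}
```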

Jack Dorsey discusses the rumored ‘edit tweet’ button and tells users to stop caring about followers

Natasha Mathur
13 Nov 2018
3 min read
Twitter CEO Jack Dorsey attended a town hall meeting at IIT Delhi yesterday, where he talked about his plans to add an “edit tweet” feature to the social media platform. He revealed that he has mixed feelings about the feature and said that he wants to ensure it gets implemented the right way.

“You have to pay attention to what are the use cases for the edit button. A lot of people want the edit button because they want to quickly fix a mistake they made. Like a misspelling or tweeting the wrong URL. That’s a lot more achievable than allowing people to edit any tweet all the way back in time," said Dorsey.

He also talked about the risks that could come with an “edit tweet” feature. It could, he pointed out, be used to change old tweets, leading to further misinformation and ‘fake news’. Dorsey conceded, however, that an edit button remains high on users’ wishlists.

https://twitter.com/KimKardashian/status/1006691477471125504

Dorsey elaborated on the conversations happening within Twitter about the feature. He said, “There’s a bunch of things we could do to show a changelog and show how a tweet has been changed and we’re looking at all this stuff. We’ve been considering edit for quite some time but we have to do it in the right way. We can’t just rush it out. We can’t make something which is distracting or takes anything away from the public record”.

Dorsey says follower count is ‘meaningless’

Dorsey also talked about the follower count feature, calling it “meaningless”. According to the Twitter chief, people should stop focusing on the number of followers they have and instead focus on cultivating “meaningful conversations”. Only last month, the news of Twitter planning to disable the ‘like’ button emerged, with precisely this reasoning. There appears to be a perception that the gamified elements of the platform are harming conversation. Dorsey admitted that “back then, we were not really thinking about all the dynamics that could ensue afterwards.” Bemoaning the importance of followers, he argued that “what is more important is the number of meaningful conversations you're having on the platform. How many times do you receive a reply?”

What this means in reality remains to be seen. There will be many who still see Twitter's attitude to verification and abuse as the real issues to be tackled if the platform is to become a place for ‘meaningful conversation.’

Twitter’s CEO, Jack Dorsey’s Senate Testimony: On Twitter algorithms, platform health, role in elections and more
Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee
Twitter’s trying to shed its skin to combat fake news and data scandals, says Jack Dorsey


Emmanuel Macron teams up with Facebook in a bid to fight hate speech on social media

Savia Lobo
13 Nov 2018
2 min read
Yesterday, Emmanuel Macron announced in a speech at the Forum on Internet Governance that the French government will establish a joint working group with Facebook. This means that Facebook will allow French regulators inside the company to examine how it combats online hate speech. This collaboration is a result of Macron's trial project called “smart regulation”, which he intends to extend to other tech leaders such as Google, Apple, and Amazon, as announced at the Tech for Good Summit held in May this year.

This six-month experiment, starting in early 2019, will allow representatives of the French authorities to access the tools, methods, and staff of the social network responsible for hunting racist, anti-Semitic, homophobic, or sexist content, and to determine whether Facebook's checks on these issues could be improved. Mr. Macron said, “It's a first. And a very innovative experimental approach, which illustrates the cooperative method that I advocate.”

According to TechCrunch, “the regulators will look at multiple steps such as how flagging works, how Facebook identifies problematic content, how Facebook decides if it’s problematic or not and what happens when Facebook takes down a post, a video or an image”. “It is unclear whether the group will have access to highly-sensitive material such as Facebook’s algorithms or codes to remove hate speech”, according to a Reuters report.

Nick Clegg, the former British deputy prime minister who is now head of Facebook's global affairs, said, “The best way to ensure that any regulation is smart and works for people is by governments, regulators and businesses working together to learn from each other and explore ideas.”

Regulators could have introduced broad regulation without consulting the company, but this collaborative process should lead to more fine-grained regulation. To know more about this news in detail, head over to TechCrunch's and Reuters' full coverage.

Following Google, Facebook changes its forced arbitration policy for sexual harassment claims
Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently
A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users’ private data up for sale, reports BBC News


Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs

Amrata Joshi
13 Nov 2018
2 min read
Test Pilot is an important part of Mozilla Firefox that allows Mozilla to test out new features and tools aimed at improving the experience of Firefox users. Yesterday, the organization launched two new Test Pilot projects: Price Wise and Email Tabs. Price Wise allows users to track the price of items online, while Email Tabs makes it easier for people to share links via email.

How Price Wise works

Essentially, Price Wise is a price-tracking tool. It allows users to add certain products to a watch list; Price Wise will send notifications when there are changes in price. The extension currently only works for eBay, Best Buy, Amazon, Walmart, and Home Depot, but there are apparently plans to extend it to other retailers and eCommerce sites. As the holiday season approaches, it makes sense for Mozilla to push it out to users. You can try it out here.

How Email Tabs works

Email Tabs is a tool that helps users send links via email. Typically, you'd need to copy and paste links into your email, but with Email Tabs, you can share from a whole list of tabs. But that's not all: users can also choose how the content should be presented in the email, whether as a simple link, a screenshot, or even the full text. At the moment this only works with Gmail, but as with Price Wise, Mozilla is looking to extend the rollout. You can try Email Tabs here.

Both experiments are available for anybody who is signed up to the Test Pilot program.

https://youtu.be/UpRLjTQmkW4

Mozilla previews Send, Color, and Side View

Mozilla also previewed other experiments that are due for release this year. Send allows you to encrypt and share large files of up to 1GB; Color lets users customize the look of Firefox; and Side View makes comparison shopping easier, as one can look at two products without having to switch back and forth between two separate web pages. To learn more, visit the Firefox website.

Mozilla shares how AV1, the new open source royalty-free video codec, works
Mozilla announces WebRender, the experimental renderer for Servo, is now in beta
Mozilla funds winners of the 2018 Creative Media Awards for highlighting unintended consequences of AI in society


Researchers show that randomly initialized gradient descent can achieve zero training loss in deep learning

Bhagyashree R
13 Nov 2018
2 min read
Yesterday, researchers from Carnegie Mellon University, the University of Southern California, Peking University, and the Massachusetts Institute of Technology published a paper on a big optimization problem in deep learning. The study proves that randomly initialized gradient descent can achieve zero training loss in polynomial time for a deep over-parameterized neural network with residual connections (ResNet). The key idea is to show that the Gram matrix is increasingly stable under overparameterization, so every step of gradient descent decreases the loss at a geometric rate.

What is this study based on?

This study builds on two ideas from previous work on gradient descent for two-layer neural networks. First, the researchers analyzed the dynamics of the predictions, whose convergence is determined by the least eigenvalue of the Gram matrix induced by the neural network architecture; to lower bound the least eigenvalue, it is sufficient to bound the distance of each weight matrix from its initialization. Second, they use the observation by Li and Liang that if the neural network is overparameterized, every weight matrix stays close to its initialization.

What are the key observations made in this study?

This study focuses on the least squares loss and assumes the activation is Lipschitz and smooth. Suppose there are n data points and the neural network has H layers of width m. The study proves the following results (restated formally below):

Fully-connected feedforward network: If m = Ω(poly(n) · 2^(O(H))), then randomly initialized gradient descent converges to zero training loss at a linear rate.

ResNet architecture: If m = Ω(poly(n, H)), then randomly initialized gradient descent converges to zero training loss at a linear rate. Compared with the first result, the dependence on the number of layers improves exponentially for ResNet; this theory demonstrates the advantage of using residual connections.

Convolutional ResNet: The same technique is used to analyze the convolutional ResNet. If m = poly(n, p, H), where p is the number of patches, then randomly initialized gradient descent achieves zero training loss.

To learn more, you can read the full paper: Gradient Descent Finds Global Minima of Deep Neural Networks.

OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
Facebook open sources QNNPACK, a library for optimized mobile deep learning
Top 5 Deep Learning Architectures
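Restated formally (my transcription of the three conditions above, using n data points, depth H, width m, and p patches; not a quotation from the paper):

```latex
% Width m sufficient for randomly initialized gradient descent to reach
% zero training loss at a linear rate:
\begin{align*}
  \text{Feedforward:}          \quad & m = \Omega\big(\mathrm{poly}(n)\, 2^{O(H)}\big) \\
  \text{ResNet:}               \quad & m = \Omega\big(\mathrm{poly}(n, H)\big) \\
  \text{Convolutional ResNet:} \quad & m = \mathrm{poly}(n, p, H)
\end{align*}
```

The exponential-in-H factor in the feedforward case versus the polynomial dependence for ResNet is exactly the gap the authors point to as the theoretical advantage of residual connections.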

The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project

Melisha Dsouza
13 Nov 2018
3 min read
At Ceph Day Berlin yesterday (November 12), the Linux Foundation announced the launch of the Ceph Foundation. A total of 31 organizations have come together to launch the Ceph Foundation, including ARM, Intel, Harvard, and many more. The foundation aims to bring industry members together to support the Ceph open source community.

What is Ceph?

Ceph is an open source distributed storage technology that provides storage services for many of the world's largest container and OpenStack deployments. The range of organizations using Ceph is vast. They include financial institutions like Bloomberg and Fidelity, cloud service providers like Rackspace and Linode, car manufacturers like BMW, and software firms like SAP and Salesforce.

The main aim of the Ceph Foundation

The main focus of the foundation is to raise money via annual membership fees from industry members. The combined pool of funds will then be spent in support of the Ceph community. The team has already raised around half a million dollars for its first year, which will be used to support the Ceph project infrastructure, cloud infrastructure services, internships, and community events. The new foundation will provide a forum for community members and industry stakeholders to meet and discuss project status, development and promotional activities, community events, and strategic direction. The Ceph Foundation replaces the Ceph Advisory Board formed back in 2015. According to a Linux Foundation statement, the Ceph Foundation will “organize and distribute financial contributions in a coordinated, vendor-neutral fashion for immediate community benefit”.

Ceph has ambitious plans for new initiatives once the foundation is properly functional. Some of these include:

Expansion of and improvements to the hardware lab used to develop and test Ceph
An events team to help plan various programs and targeted regional or local events
Investment in strategic integrations with other projects and ecosystems
Programs around interoperability between Ceph-based products and services
Internships, training materials, and much more

The Ceph Foundation will provide an open, collaborative, and neutral home for project stakeholders to coordinate their development and community investments in the Ceph ecosystem. You can head over to their blog to learn more about this news.

Facebook’s GraphQL moved to a new GraphQL Foundation, backed by The Linux Foundation
NIPS Foundation decides against name change as poll finds it an unpopular superficial move; instead increases ‘focus on diversity and inclusivity initiatives’
Node.js and JS Foundation announce intent to merge; developers have mixed feelings


Day 1 of Chrome Dev Summit 2018: new announcements and Google’s initiative to close the gap between web and native

Sugandha Lahoti
13 Nov 2018
4 min read
The 6th Chrome Dev Summit 2018 is being hosted on the 12th and 13th of this month in San Francisco. Yesterday, Day 1 of the summit was opened by Ben Galbraith, the director of Chrome, with a talk on “the web platform’s latest advancements and the evolving landscape.” Leading web developers also described their modern web experiences.

Major Chrome Dev Summit 2018 announcements included web.dev, a new developer resource website, and a demonstration of VisBug, a browser-based visual development tool. The summit also included a demo of a new web tool called Squoosh that can downsize, compress, and reformat images. The Chrome Dev Summit 2018 also highlighted some of the browser APIs currently in development, including Web Share Target, Wake Lock, WebHID, and more. It also featured a Writable Files API currently under development, which would allow web apps to edit local files.

New web-based tools and resources

web.dev
The web.dev resource website provides an aggregation of information on modern Web APIs. It helps users monitor their sites over time to ensure that they can keep them fast, resilient, and accessible. web.dev was created in partnership with Glitch and has a deep integration with Google's Lighthouse tool.

VisBug
Another developer tool, VisBug, helps developers easily edit a web page using a simple point-and-click, drag-and-drop interface. This is an improvement over the earlier Firebug, which worked through the website's source code. VisBug is currently available as a Chrome extension that can be installed from the main Chrome Web Store.

Squoosh
The Squoosh tool allows you to encode images using best-in-class codecs like MozJPEG, WebP, and OptiPNG. It works cross-browser and offline, and all codecs are supported even in browsers with no native support, using WASM. The app can do a 1:1 visual comparison of the original image and its compressed counterpart, to help users understand the pros and cons of each format.

Closing the gap between web and native

Google is also taking initiatives to close the gap between the web and native, and to make it easy for developers to build great experiences on the open web. To that end, Chrome will work with other browser vendors to ensure interoperability and gather early developer feedback. Proposals will be submitted to the W3C Web Incubator Community Group for feedback. According to Google, this open development process will be “no different than how we develop every other web platform feature.” The first initiative in this direction is the Writable Files API.

The Writable Files API
Currently under development, the Writable Files API is designed to increase the interoperability of web applications with native applications. Users can choose files or directories that a web app can interact with on the native file system, so developers don't have to use a native wrapper like Electron to ship their web app. With the Writable Files API, developers can create a simple, single-file editor that opens a file, allows the user to edit it, and saves the changes back to the same file.

People were surprised that it was Google that jumped on this process rather than Mozilla, which has already implemented versions of a lot of these APIs. A Hacker News user said, “I guess maybe not having that skin in the game anymore prevented those APIs from becoming standardized? But these are also very useful for desktop applications. Anyways, this is a great initiative, it's about time a real effort was made to close that gap.”

Here's a video playlist of all the Chrome Dev Summit sessions so far. Tune into Google's livestream to follow the rest of the sessions of the day, and watch this space for more exciting announcements.

Meet Carlo, a web rendering surface for Node applications by the Google Chrome team.
Google Chrome 70 now supports WebAssembly threads to build multi-threaded web applications.
#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment


UK researchers have developed a new PyTorch framework for preserving privacy in deep learning

Prasad Ramesh
13 Nov 2018
3 min read
UK professors and researchers have developed the first general framework for safeguarding privacy in deep learning, built over PyTorch. They have reported their findings in the paper “A generic framework for privacy preserving deep learning.”

Using constructs that preserve privacy

The paper introduces a transparent framework for preserving privacy while using deep learning in PyTorch. This framework puts a premium on data ownership and its secure processing, and introduces a value representation based on chains of commands and tensors. The resulting abstraction allows the implementation of complex privacy-preserving constructs such as federated learning, secure multiparty computation, and differential privacy. The Boston Housing and Pima Indian Diabetes datasets are used in the paper to show early results. Except for differential privacy, the privacy features do not affect prediction accuracy. The current implementation of the framework introduces a significant overhead, which is to be addressed at a later development stage.

Deep learning operations in untrusted environments

To perform operations in untrusted environments without disclosing data, a popular approach is Secure Multiparty Computation (SMPC). In machine learning, SMPC can protect the model weights while allowing multiple worker nodes to participate in training with their own datasets; this is known as federated learning (FL). These securely trained models are still vulnerable to reverse engineering attacks, a vulnerability addressed by differentially private (DP) methods.

The standardized PyTorch framework contains:

A chain structure in which transformations or transmissions of tensors to other workers can be expressed as a chain of operations.
A concept called Virtual Workers, which takes federated learning from a virtual to a real context. Virtual Workers reside in the same machine and do not communicate over the network (sketched in the code example below).

Results and conclusion

A reasonably small overhead is observed when using Web Socket workers in place of Virtual Workers. This overhead is due to the low network latency when communication takes place between different local tabs. The same overhead in performance is observed when using the Pima Indian Diabetes dataset. The design in this paper relies on chains of tensors exchanged between local and remote workers. Decreasing training time is an issue still to be addressed. Another concern is securing MPC against malicious attempts to corrupt the data or the model.

For more details, read the research paper.

PyTorch 1.0 preview release is production ready with torch.jit, c10d distributed library, C++ API
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning?
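The framework the paper describes is OpenMined's PySyft library. As a rough sketch of the Virtual Worker chain model mentioned above (API names follow the PySyft releases contemporary with the paper and may have changed since):

```python
# A minimal sketch of PySyft's Virtual Worker model (PySyft ~0.1/0.2-era
# API; names may differ in later releases).
import torch
import syft as sy

# Hook PyTorch so tensors gain .send()/.get() and operations can be
# recorded as chains of commands.
hook = sy.TorchHook(torch)

# Virtual Workers live in the same process and exchange messages
# locally instead of over the network.
bob = sy.VirtualWorker(hook, id="bob")

# Sending a tensor leaves only a pointer locally; the data lives on bob.
x = torch.tensor([1.0, 2.0, 3.0]).send(bob)
y = torch.tensor([4.0, 5.0, 6.0]).send(bob)

# The addition executes on bob's copy of the data; only the final
# .get() moves the result back to the local worker.
z = (x + y).get()
print(z)  # tensor([5., 7., 9.])
```

Swapping the Virtual Workers for Web Socket workers is what produces the small overhead the paper reports, since messages then cross (local) network boundaries instead of staying in-process.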

Worldwide Outage: YouTube, Facebook, and Google Cloud go down affecting thousands of users

Natasha Mathur
13 Nov 2018
3 min read
It looks like the ‘outage flu’ isn’t over yet. Yesterday, YouTube, Facebook, and Google Cloud all went down, affecting thousands of users. This comes only weeks after both GitHub and YouTube faced worldwide outages lasting well over half an hour.

YouTube users took to Twitter yesterday to vent their frustration. A range of issues were reported, from video playback problems to other error messages.

https://twitter.com/easy_twits/status/1062111051066564608
https://twitter.com/adb071004/status/1062099393120223238
https://twitter.com/BEvansBabyGirl/status/1062108516582858754

The YouTube team hasn't yet explained what they believe caused the outage. So far, the team has only tweeted that they're aware of the issue and have received similar reports from users.

https://twitter.com/TeamYouTube/status/1062112996833591299
https://twitter.com/TeamYouTube/status/1062111896604680192

Facebook goes down

Facebook also went down yesterday morning on the west coast of the U.S. The outage lasted for about 40 minutes. The website showed an error saying simply, “Sorry, something went wrong. We're working on it and we'll get it fixed as soon as we can”, to users trying to access the site. Users also faced difficulties using Facebook Messenger. The outage was the result of an issue caused by a routine test conducted by Facebook.

https://twitter.com/iSocialFanz/status/1062042249146654720
https://twitter.com/laststand1507/status/1062046772011384832
https://twitter.com/femiredwood/status/1062049127209541632

Problems on Google Cloud

Google Cloud faced the same fate as YouTube and Facebook, going down yesterday afternoon; the incident started at around 1:12 PM PT and ended at 2:35 PM PT. Users faced connectivity issues when using a number of Google services, including Google APIs, load balancers, and other external IP addresses. The Google support team posted an update on the outage on the Google Cloud status dashboard: “Throughout the duration of this issue Google services were operating as expected and we believe the root cause of the issue was external to Google. We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence”. Google also tweeted back to users who were complaining about the outage.

https://twitter.com/GCPcloud/status/1062112427523870720
https://twitter.com/eddiepluswang/status/1062115371107446784

GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage
Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently


Basecamp 3 faces a read-only outage of nearly 5 hours

Bhagyashree R
13 Nov 2018
3 min read
Yesterday, Basecamp shared the cause behind the outage Basecamp 3 faced on November 8. The outage continued for nearly five hours, starting at 7:21 am CST and ending at 12:11 pm. During this time, users were only able to access existing messages, to-do lists, and files; they were prevented from entering any new information or altering any existing information. David Heinemeier Hansson, the creator of Ruby on Rails and founder & CTO at Basecamp, said in his post that this was the worst outage Basecamp has faced in probably 10 years: “It’s bad enough that we had the worst outage at Basecamp in probably 10 years, but to know that it was avoidable is hard to swallow. And I cannot express my apologies clearly or deeply enough.”

https://twitter.com/basecamp/status/1060554610241224705

Key causes behind the Basecamp 3 outage

Every activity a user performs is tracked in Basecamp's events table, whether it is posting a message, updating a to-do list, or applauding a comment. The root cause of Basecamp going into read-only mode was its database hitting the ceiling of 2,147,483,647 records on this very busy events table. Secondly, Ruby on Rails, the programming framework Basecamp uses, updated its default for database tables in version 5.1, released in 2017, lifting the headroom for records from 2,147,483,647 to 9,223,372,036,854,775,807 on all tables. But the column in Basecamp's database was still configured as an integer rather than a big integer.

The complete timeline of the outage

7:21 am CST: Basecamp ran out of ID numbers on the events table in the database, because the column was configured as an integer rather than a big integer. An integer runs out of numbers at 2,147,483,647; a big integer can grow up to 9,223,372,036,854,775,807.

7:29 am CST: The team started working on a database migration to update the column type from regular integer to big integer, first testing the fix on a staging database to make sure it was safe.

7:52 am CST: The test on the staging database verified that the fix was correct, so the team moved on to making the change to the production database table. Due to the huge size of the production database, the migration was estimated to take about one hour and forty minutes.

10:56 am to 11:52 am CST: The upgrade to the database was completed, but verification of all the data and configuration updates were still required to ensure no other problems would appear once the service was back online.

12:22 pm CST: After successful verification, Basecamp came back online.

12:33 pm CST: Basecamp went down again because the intense load of the application coming back online overwhelmed the caching server.

12:41 pm CST: Basecamp came back online after switching over to the backup caching servers.

To read the entire update on Basecamp's outage, check out David Heinemeier Hansson's post on Medium.

GitHub October 21st outage RCA: How prioritizing ‘data integrity’ launched a series of unfortunate events that led to a day-long outage
Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
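Basecamp's actual fix was a Rails migration; purely as an illustration of the same class of change, here is how widening such a column might look as an Alembic migration in Python (the table name comes from the article, but the migration itself is hypothetical):

```python
# Illustrative only: Basecamp's real fix was a Rails migration. This
# Alembic migration shows the same class of change, widening an ID
# column from a 32-bit integer (max 2,147,483,647) to a 64-bit bigint
# (max 9,223,372,036,854,775,807).
import sqlalchemy as sa
from alembic import op

def upgrade():
    # On a very large table this rewrite takes a long time, which is
    # why Basecamp's migration was estimated at about 1h40m.
    op.alter_column('events', 'id',
                    existing_type=sa.Integer(),
                    type_=sa.BigInteger(),
                    existing_nullable=False)

def downgrade():
    # Narrowing back is unsafe once IDs exceed the 32-bit range.
    op.alter_column('events', 'id',
                    existing_type=sa.BigInteger(),
                    type_=sa.Integer(),
                    existing_nullable=False)
```

Whatever the framework, the underlying operation is an ALTER TABLE that rewrites the column, and the table rewrite is what makes such migrations slow on busy tables like Basecamp's events table.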