Tech News


Ahead of EU's vote on new copyright rules, EFF releases 5 key principles to guide copyright policy

Sugandha Lahoti
15 Jan 2019
3 min read
The Electronic Frontier Foundation (EFF) is taking part in Copyright Week. Its motto: "Copyright should encourage more speech, not act as a legal cudgel to silence it." According to the EFF, copyright policy is too often shaped largely by the media and entertainment industries, with little input from other domains. To counter this, the EFF has teamed up with other organizations to participate in Copyright Week, highlighting five copyright issues that can serve as a set of principles for copyright law. Participating organizations this year include the Association of Research Libraries, Authors Alliance, Copyright for Creativity, DisCo, iFixit, R Street, Techdirt, and Wikimedia. For 2019, they have highlighted five issues, and throughout the week they will be publishing blog posts and actions on these issues on their blog and on Twitter.

EFF's copyright issues for this year:

- Copyright as a Tool of Censorship: Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.
- Device and Digital Ownership: As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it, meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.
- Public Domain and Creativity: Copyright policy should encourage creativity, not hamper it. Excessive copyright terms inhibit our ability to comment, criticize, and rework our common culture.
- Safe Harbors: Safe harbor protections allow online intermediaries to foster public discourse and creativity. Safe harbor status should be easy for intermediaries of all sizes to attain and maintain.
- Filters: Whether as a result of corporate pressure or regulation, over-reliance on automated filters to patrol copyright infringement presents a danger to free expression on the Internet.

This month the EU is set to vote on new copyright rules, which have drawn major opposition from Europeans. Per the EFF, Article 11 (the "link tax" rule) and Article 13 (the "censorship machines" rule) have the power to crush small European tech startups and expose half a billion Europeans to mass, unaccountable algorithmic censorship. Under Article 13, online platforms would be required to use algorithmic filters to unilaterally determine whether any uploaded content, from social media posts to videos, infringes copyright. The law would penalize companies that allow a user to infringe copyright, but not companies that overblock and censor their users; the outcome would be censorship of massive proportions. The Directive is now in the hands of the European member states. The EFF urges people in Sweden, Germany, Luxembourg, and Poland to contact their ministers to convey their concerns about Articles 13 and 11.

Related reads:
- Reddit takes a stand against the EU copyright directives; greets EU redditors with 'warning box'
- GitHub updates developers and policymakers on EU copyright Directive at Brussels
- What the EU Copyright Directive means for developers, and what you can do


LLVM officially migrating to GitHub from Apache SVN

Prasad Ramesh
14 Jan 2019
2 min read
In October last year, it was reported that LLVM (Low Level Virtual Machine) was moving from Apache Subversion (SVN) to GitHub. The migration, long under discussion, is now officially complete and LLVM is available on GitHub.

LLVM is a toolkit for creating compilers, optimizers, and runtime environments. One motivation for the move was that continuous integration in LLVM sometimes broke when the SVN server went down. The project migrated to GitHub for services SVN lacks, such as better 24/7 stability, more disk space, code browsing, and forking. GitHub is also already used by most of the LLVM community, and unofficial mirrors existed there before this official migration.

Last week, James Y Knight from the LLVM team wrote to a mailing list: "The new official monorepo is published to LLVM's GitHub organization, at: https://github.com/llvm/llvm-project. At this point, the repository should be considered stable -- there won't be any more rewrites which invalidate commit hashes (barring some _REALLY_ good reason...)"

Along with LLVM, this monorepo also hosts Clang, LLD, Compiler-RT, and other LLVM sub-projects. Commits are being made to the LLVM GitHub repository even at the time of writing, and the repo currently has about 200 stars. Updated workflow documents and instructions for migrating in-progress user work are being drafted and will be available soon. The move was initiated after positive responses from LLVM community members. To stay up to date with more details, you can follow the LLVM mailing list.

Related reads:
- LLVM will be relicensing under Apache 2.0 start of next year
- A libre GPU effort based on RISC-V, Rust, LLVM and Vulkan by the developer of an earth-friendly computer
- LLVM 7.0.0 released with improved optimization and new tools for monitoring


NetBeans, IntelliJ IDEA, and PyCharm come to Haiku OS

Prasad Ramesh
14 Jan 2019
2 min read
Last week three IDEs were ported to Haiku OS. Haiku users can now build applications with NetBeans, IntelliJ IDEA, and PyCharm.

Haiku is an offspring of BeOS, the operating system created by the company founded by former Apple executive Jean-Louis Gassée. Haiku's development began in 2001, and the first beta was released in September 2018. It is a single-user system targeted specifically at personal computing, with a custom kernel, a fully threaded design, and a cohesive interface. Haiku carries forward the progressive concepts of BeOS and delivers them in a free and open source package.

Now, let's look at the package ports to Haiku. Haiku OS users can now run both NetBeans and IntelliJ IDEA with the OpenJDK8 x86_64 port. This is not in the depot yet, only in the haikuports recipe. Moreover, thanks to the efforts of another community member, a package for PyCharm Community Edition 2018.3 is also available. There are some minor issues, and users have to work around a few settings to get things working.

The addition of NetBeans IDE 8.2 and IntelliJ IDEA Community Edition 2018.3 to Haiku OS has many of its users excited. A comment on Hacker News says: "That's a really impressive achievement I think, those are complex applications running on complex stacks. It's certainly a big step in the direction of making Haiku a system that a developer could plausibly run for the development of cross-platform applications. This coupled with the Libre Office port last year means there's a pretty strong selection of applications for it cropping up."

Note that, by default, Haiku ships with the dying Python 2.7; the next major version, Python 3, can be installed via the package manager. To keep an eye on updates to these IDEs, head over to the Haiku forums.

Related reads:
- Haiku beta released with package management, a new preflet, webkit and more
- The Haiku operating system has released R1/beta1
- Haiku, the open source BeOS clone, to release in beta after 17 years of development


Facebook, Twitter, and other tech giants to fight against India's new "intermediary guidelines", Reuters reports

Melisha Dsouza
14 Jan 2019
4 min read
According to a report by Reuters released late last month, the Indian Information Technology ministry has proposed rules that would compel major technology companies like Facebook, WhatsApp, and Twitter to take down unlawful content affecting the "sovereignty and integrity of India". Under the rules, such content would have to be taken down within 24 hours of notification by a court or a government body. The rules are proposed with the stated aim of achieving 'a safer social media'. The ministry's draft is open for public comment until 31st January 2019, after which it will be adopted as law, either 'with or without changes'.

Now, Reuters reports that sources familiar with the matter have revealed that the tech giants are preparing to fight these content-regulation rules. India is one of the world's biggest Internet markets, with about 300 million Facebook users, more than 200 million WhatsApp users, and millions of Twitter users. Reuters also reports that many U.S. and Indian lobby groups representing these top tech companies have started seeking legal opinions on the impact of the rules, and have been advised by law firms on drafting objections to be filed with the IT ministry.

According to the Ministry of Electronics and Information Technology, the draft Intermediary Guidelines will "curb misuse of Social Media for mob lynching and other violence". Last year, fake messages about child traffickers and kidnappers circulated through WhatsApp sparked mob lynchings in India. Mozilla Corp. called the proposal "a blunt and disproportionate" solution to regulating harmful online content, adding that the rules could lead to over-censorship. Gopalakrishnan S, joint secretary at India's IT ministry, said the proposal would 'make social media safer' and 'not curb freedom of speech'.

Industry executives and civil rights activists disagree. They say the rules could be used by the government of Prime Minister Narendra Modi to increase surveillance of the public, given that the proposal comes just ahead of India's national election in May. Sources also expressed concern to Reuters that the rules would put user privacy at stake through round-the-clock monitoring of online content, since the rules require companies with more than 5 million Indian users to have a local office and a nodal officer for 24x7 coordination with law enforcement. The rules also mandate that, when questioned by the government, companies must reveal the origin of a message, undermining user confidentiality on platforms like WhatsApp that use end-to-end encryption to protect user privacy.

Twitter was abuzz with mixed sentiments. While some supported the goal of banishing fake news and misinformation from the internet, others were concerned about targeted surveillance.
https://twitter.com/akhileshsharma1/status/1081499612698083328
https://twitter.com/subhapa/status/1083240653272825856
https://twitter.com/subhapa/status/1083256991156453377

While the rules come just in time to prevent malicious actors from misusing social media platforms to spread fake news and sway voters, we cannot help but notice the strict impositions tech giants will face if this draft becomes law. You can head over to Reuters for the entire coverage of this news.

Related reads:
- US government privately advised by top Amazon executive on web portal worth billions to the Amazon; The Guardian reports
- Australia passes a rushed anti-encryption bill "to make Australians safe"; experts find "dangerous loopholes" that compromise online privacy and safety
- Australia's Assistance and Access (A&A) bill, popularly known as the anti-encryption law, opposed by many including the tech community


Google bids farewell to its audio dongle, Chromecast Audio

Amrata Joshi
14 Jan 2019
3 min read
Last week, Google decided to stop manufacturing Chromecast Audio, the audio dongle that let users connect speakers to their Google Cast setup, since the company now has a variety of newer audio products. The remaining stock of the Chromecast Audio is being sold for $15 instead of $35.

The Chromecast Audio dongle is designed to plug into a regular speaker via a 3.5 mm audio cable. The device works smoothly, pulling audio from plenty of apps at a louder volume without resorting to Bluetooth. Chromecast Audio launched in 2015 alongside the second-generation Chromecast, and over the years it evolved to add features such as multi-room support. Google will still support existing Chromecast Audio users for the time being.

In a statement to TechCrunch, Google said, "Our product portfolio continues to evolve, and now we have a variety of products for users to enjoy audio. We have therefore stopped manufacturing our Chromecast Audio products. We will continue to offer assistance for Chromecast Audio devices, so users can continue to enjoy their music, podcasts and more." It seems Google is more inclined to get people to purchase its home products: Google Assistant devices or Cast-enabled speakers from its partners.

Users are giving mixed reactions to this news. Some users are now wary of investing in Google products, as they think Google too often cans its products. One user commented on Hacker News, "Google is really developing a reputation for starting and canning projects. I'd recommend not getting too invested in their products when possible." Google has shut down a lot of services in recent years, the latest being Inbox, which shuts down in March this year. Users were also unhappy when Google discontinued Google Reader in 2013. All of this hints at Google's tendency to shut down even popular products.

One comment on Hacker News reads, "Google Reader was damn useful and is a poster child of Google's habit of hyping up useful products and then canning them." Some users still support Google and its decision. One user commented, "I don't know who out there is heavily invested in a $35 audio dongle. I love mine, but it still works just as well today as it did yesterday and not being able to order more isn't causing me any anxiety." Another user commented, "The 3-something year old hardware dongle is no longer being made, that's it. That's the entirety of the news. The Cast project as a whole is not being canned. Cast-enabled speakers, receivers, etc... are all still widely available from a wide number of manufacturers, that's not changing."

Related reads:
- TLS comes to Google public DNS with support for DNS-over-TLS connections
- Researchers release unCaptcha2, a tool that uses Google's speech-to-text API to bypass the reCAPTCHA audio challenge
- Google's secret Operating System 'Fuchsia' will run Android Applications: 9to5Google Report


Metasploit 5.0 released!

Savia Lobo
14 Jan 2019
3 min read
Last week, the Metasploit team announced the release of its fifth version, Metasploit 5.0. The update introduces multiple new features, including new database and automation APIs, evasion modules and libraries, expanded language support, improved performance, and more. Metasploit 5.0 supports three module languages: Go, Python, and Ruby.

What's new in Metasploit 5.0?

Database as a RESTful service: Metasploit 5.0 adds the ability to run the database by itself as a RESTful service on top of the existing PostgreSQL backend from the 4.x versions. With this, multiple Metasploit consoles can interact easily. The change also offloads some bulk operations to the database service, which improves performance by allowing database operations and regular msfconsole operations to run in parallel.

New JSON-RPC API: This new API will benefit users who want to integrate Metasploit with new tools and languages. Until now, Metasploit supported automation via its own unique network protocol, which made it difficult to test or debug using standard tools like curl.

A common web service framework: Metasploit 5.0 also adds a common web service framework to expose both the database and the automation APIs. The framework supports advanced authentication and concurrent operations, and paves the way for future services.

New evasion modules and libraries: The Metasploit team announced a new evasion module type, along with a couple of example modules, in 2018. Using these module types, users can develop their own evasions, and a set of convenient libraries lets developers add new on-the-fly mutations to payloads. A recent module uses these evasion libraries to generate unique persistent services. With Metasploit 5.0's generation libraries, users can now write shellcode in C.
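To illustrate why the JSON-RPC API is easier to script against than the old binary protocol, here is a minimal sketch that builds a standard JSON-RPC 2.0 request body of the kind such an API consumes. This is generic JSON-RPC code, not Metasploit-specific: the method name used below is a placeholder, not a real Metasploit API name, and the real endpoint URL and authentication are documented in Metasploit's own docs.

```python
import json

def make_jsonrpc_request(method, params, request_id=1):
    """Serialize a JSON-RPC 2.0 request body (per the JSON-RPC 2.0 spec)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,   # placeholder method name for illustration
        "id": request_id,   # lets the caller match responses to requests
        "params": params,
    })

# Build a request body; "example.ping" is a made-up method name.
body = make_jsonrpc_request("example.ping", {})

# Because the payload is plain JSON over HTTP, it can be sent and inspected
# with ordinary tools such as curl, which the old protocol made difficult.
print(body)
```

The point is that the envelope is self-describing JSON, so any language with an HTTP client and a JSON library can drive the API.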
Executing an exploit module against multiple hosts: The ability to execute an exploit module against more than one target at a time was a long-requested feature. Previously an exploit module was limited to one host at a time, so any attempt at mass exploitation required writing a script or manual interaction. With Metasploit 5.0, any module can target multiple hosts by setting RHOSTS to a range of IPs or referencing a hosts file with the file:// option.

Improved search mechanism: Metasploit's slow search has been upgraded and now starts much faster out of the box, so searching for modules is always fast, regardless of how you use Metasploit. In addition, modules have gained a lot of new metadata capabilities.

New metashell feature: The new metashell feature allows users to background sessions with the background command, upload and download files, or run resource scripts, all without first upgrading to a Meterpreter session.

For backward compatibility, Metasploit 5.0 still supports running with just a local database, or with no database at all, and it still supports the original MessagePack-based RPC protocol. To know more about this news in detail, read the release notes on GitHub.

Related reads:
- Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]
- Pentest tool in focus: Metasploit
- Getting Started with Metasploitable2 and Kali Linux

The Ember project announces version 3.7 of Ember.js, Ember Data, and Ember CLI

Bhagyashree R
14 Jan 2019
2 min read
After releasing Ember 3.6 last month, the team behind the Ember project released version 3.7 of Ember.js, Ember Data, and Ember CLI last week. As always, Ember 3.7 marks the start of the 3.8 beta cycle for all the subprojects. This version drops support for Babel 6 and Node 4, along with a few bug fixes and performance improvements. There are no changes in the Ember Data subproject.

Updates in Ember.js 3.7

Support for Node 4 has been explicitly dropped, so if you want to upgrade to this version you first need to upgrade your Node version. Node 6 support is also planned to end in the next few months.

Updates in Ember CLI

Last usage of Babel 6 removed: The last usage of Babel 6 is removed in Ember CLI 3.7. Babel 6 was used for compiling templates in addon/, and for supporting addon-test-support/ in addons that do not have any .js processors. Since module compilation is compatible between Babel 6 and Babel 7, this update is not a breaking change.

Compatibility section in addon README: Whenever a new addon is generated using Ember CLI, a README file is also generated for it. This README now includes a Compatibility section, which lets you easily communicate the requirements for using the addon.

You can upgrade Ember CLI using the following commands:

npm install -g ember-cli-update
ember-cli-update

Related reads:
- The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI
- The Ember project announces version 3.4 of Ember.js, Ember Data, and Ember CLI
- Ember project releases v3.2.0 of Ember.js, Ember Data, and Ember CLI


Amazon is reportedly building a video game streaming service, says Information

Sugandha Lahoti
14 Jan 2019
2 min read
According to a report by The Information, Amazon is developing a video game streaming service. Microsoft and Google have previously announced similar offerings: in October, Google announced an experimental game streaming service, Project Stream, and in the same month Microsoft's gaming chief Phil Spencer confirmed Project xCloud, a streaming game service for any device.

Amazon's idea is to bring top gaming titles to virtually anyone with a smartphone or streaming device. The service would handle all the compute-intensive calculations needed to run graphics-intensive games in the cloud, then stream them directly to a smart device, so that gamers get the same experience as running the titles natively on a high-end gaming system.

The Information says that although the Amazon gaming service isn't likely to launch until next year, Amazon has begun talking to game publishers about distributing their titles through the service. The initiative has a good chance of succeeding given that Amazon is the biggest player in the cloud market: Amazon currently holds 32 percent of it, compared with Microsoft Azure's 17 percent and Google Cloud's 8 percent. That would make it easier for gamers to take advantage of Amazon's vast cloud offerings and play elaborate, robust games even on their mobile devices.

As The Information notes, a successful streaming platform could upend the long-standing business model of the gaming world, in which customers pay $50 to $60 for a triple-A title. Amazon has yet to share details of such a video gaming service officially. Check out the full report on The Information.

Related reads:
- Microsoft announces Project xCloud, a new Xbox game streaming service
- Now you can play Assassin's Creed in Chrome thanks to Google's new game streaming service
- Corona Labs open sources Corona, its free and cross-platform 2D game engine


AT&T and other telcos to suspend selling customer location data after Motherboard’s investigation, reports WSJ

Bhagyashree R
11 Jan 2019
4 min read
On Thursday, AT&T said in a statement to the WSJ that it will stop selling customers' location data to third-party services, following a report published by Motherboard. Motherboard's investigation disclosed how location information sold by telecom companies can eventually reach the wrong hands, putting customers' privacy and safety in danger.

What Motherboard's investigation revealed

Motherboard's investigation revealed that telecommunication companies such as T-Mobile, AT&T, and Sprint sell users' real-time location data, which can ultimately reach the wrong hands, such as stalkers or criminals. The investigation showed that mobile networks, and the data they generate, are not as secure as we want them to be. Telecom companies in the US sell users' location data to companies called location aggregators, which can then sell the information to their own clients and industries. This forms a complex supply chain sharing users' most sensitive data, and in some cases the originating telecom companies may not even know how the data is used by the eventual end user. A similar scenario played out in May last year, when Sen. Ron Wyden revealed in a letter to the FCC that Securus, an indirect corporate customer of Verizon, had used customer location data to effectively allow officers to spy on millions of Americans. In reply, Verizon filed a letter saying it was ending its data-sharing agreements with LocationSmart and Zumigo. AT&T and Sprint then also announced that they were cutting ties with location aggregators.

How the companies reacted

AT&T said in a statement, "In light of recent reports about the misuse of location services, we have decided to eliminate all location aggregation services -- even those with clear consumer benefits. We are immediately eliminating the remaining services and will be done in March." John Legere, T-Mobile's chief executive, tweeted that his company will also completely end the location aggregator work in March. Sprint responded to the WSJ, "Protecting our customers' privacy and security is a top priority, and we are transparent about that in our Privacy Policy. We do not knowingly share personally identifiable geolocation information except with customer consent or in response to a lawful request such as a validated court order from law enforcement."

A victory for privacy advocates

Sen. Ron Wyden believes telecom companies cannot simply blame the third-party companies, and that strong legislation is needed to keep our data secure. He said, "Congress needs to pass strong legislation to protect Americans' privacy and finally hold corporations accountable when they put your safety at risk by letting stalkers and criminals track your phone on the dark web." Sen. Kamala D. Harris called for an immediate investigation by the Federal Communications Commission (FCC), calling the practice a major threat to user security: "I'm extraordinarily troubled by reports of this system of repackaging and reselling location data to unregulated third party services for potentially nefarious purposes. If true, this practice represents a legitimate threat to our personal and national security." One user in a MetaFilter discussion said, "The greatest thing that U.S. law needs in the age of privacy concerns is a broadening of the category referred to in the law as 'innkeepers and common carriers,' which is basically an olde-tyme recognition that there are sorts of private businesses that are central enough to everyday life and serve such a purpose as to need to be held to a higher duty of care than others. Cell phones, social media, and bank records all should fall under this duty of care in any sensible modern society."

Related reads:
- FCC grants Google the approval for deploying Soli, their radar-based hand motion sensor
- Mozilla v. FCC: Mozilla challenges FCC's elimination of net neutrality protection rules
- Spammy bots most likely influenced FCC's decision on net neutrality repeal, says a new Stanford study


Amazon’s Ring gave access to its employees to watch live footage of the customers, The Intercept reports

Amrata Joshi
11 Jan 2019
5 min read
According to a report by The Intercept, Ring, Amazon's smart doorbell company, gave its employees access to live footage from customers' cameras. Per the claim, Ring engineers and executives were allowed to watch unfiltered footage of users. Amazon acquired Ring for $1 billion in February last year; Amazon was also in the news last year for a data breach in which it leaked customers' email addresses.

Ring markets its doorbell-mounted cameras as a security measure, a kind of privatized neighborhood watch for when the user is away. Ring staff could reportedly access cameras both inside and outside the home, depending on where the devices were positioned. Ring has been accused of mishandling videos collected by the devices and of failing to protect the footage with encryption; a Ring customer's email address was reportedly enough to gain access to cameras in that user's home. According to The Information and The Intercept, Ring's video annotation team would watch camera footage and tag objects, humans, and other things in the clips so that its object recognition software could improve. In 2016, Ring gave its Ukraine-based research and development team unfettered access to a folder on Amazon's S3 cloud storage service containing unencrypted videos created by Ring cameras.

Ring's Neighbors app, which lets users receive real-time crime and safety alerts, doesn't mention image or facial recognition in its description, and Ring's terms of service and privacy policy say nothing about manual video annotation being conducted by humans. Ring maintained that the videos weren't shared by the company, and responded to this post stating, "We take the privacy and security of our customers' personal information extremely seriously. In order to improve our service, we view and annotate certain Ring video recordings. These recordings are sourced exclusively from publicly shared Ring videos from the Neighbors app (in accordance with our terms of service), and from a small fraction of Ring users who have provided their explicit written consent to allow us to access and utilize their videos for such purposes. Ring employees do not have access to livestreams from Ring products. We have strict policies in place for all our team members. We implement systems to restrict and audit access to information. We hold our team members to a high ethical standard and anyone in violation of our policies faces discipline, including termination and potential legal and criminal penalties. In addition, we have zero tolerance for abuse of our systems and if we find bad actors who have engaged in this behavior, we will take swift action against them."
https://twitter.com/briankrebs/status/1065219981833617408

Because of the privacy concerns, users are now skeptical about using Ring's smart doorbell. One comment on Hacker News read, "The ring doorbell is installed at your front door. It records pretty much all movement to and from your house. It records audio at the doorstep, so if you're having a conversation with anyone at your doorstep, that gets recorded too." Another user commented, "If some rando gets my ring doorbell footage and figures out where I live, that's hard to undo. If someone steals my stuff and gets away with it because I didn't have a ring doorbell, that's annoying but much easier to recover from. We are talking about the difference between an insurance claim and moving house." According to a few users, the device is also prone to DDoS attacks. One user commented, "Aside from the 700 person team given access to live video feeds and customer databases, the lack of proper security of this product makes it a PRIME target for DDOS attacks that could cripple infrastructure."

Some users are in favor of such devices, finding them safe and convenient to use. One user commented, "These devices are extremely popular in my neighborhood, and cost/convenience is the only thing keeping them from being universal." Another user commented, "I'd say, yes. I've been able to watch that many people see the ring (they see the camera), and they back right off the porch. It's been awesome in this respect, people simply ring it less." Some users believe such surveillance devices shouldn't use the cloud and should store data locally instead; others are looking at alternatives like the Xiaomi Dafang camera, the RCA doorbell camera, and Blue Iris.

This news surely makes one reflect on how home appliances can be monitored by companies or hackers, and how personal data might be misused. Note: We have edited this news to include the response from the Ring team to our post.

Related reads:
- AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more
- Amazon confirms plan to sell a HIPAA eligible software, Amazon Comprehend Medical, which will mine medical records of the patients
- US government privately advised by top Amazon executive on web portal worth billions to the Amazon; The Guardian reports

Updates on scikit-learn future, scikit-learn 0.20 - talk by Andreas Mueller

Prasad Ramesh
11 Jan 2019
4 min read
Recently Andreas Mueller gave a talk on changes in scikit-learn 0.20 and future releases. He is an Associate Research Scientist at the Data Science Institute at Columbia University, New York, and a core developer of the scikit-learn library. scikit-learn is a popular machine learning library for the Python programming language. In scikit-learn, data is represented as a 2D NumPy array, where each row is a sample and each column is a feature of your dataset. scikit-learn 0.20 has now been released. Here are the highlights from the talk about scikit-learn's future.

Preprocessing changes

scikit-learn 0.20 aims to simplify things for users, especially preprocessing.

The OneHotEncoder is rewritten to support strings

Previously, the OneHotEncoder in scikit-learn only supported integers, so string categorical variables had to be encoded as integers first.

ColumnTransformer

Another feature to help with preprocessing is the ColumnTransformer. It is similar to the previously existing FeatureUnion. The ColumnTransformer lets developers apply different transformations or preprocessing steps to different columns in a columnar dataset. The make_column_transformer helper can make use of the column names.

PowerTransformer

The basic idea of the PowerTransformer is to apply a power transformation to the data, with the goal of making the data more normally distributed.

Treatment of missing values

Scalers like StandardScaler, MinMaxScaler, and RobustScaler now allow missing values in the data, so you can apply scikit-learn scalers before filling in or imputing missing values. During fitting, they all ignore the missing values. Imputer is now SimpleImputer, a simplified version that will also gain more complex model-based imputation strategies. MissingIndicator is added, which records which values have been imputed; the fact that a value was missing can itself tell you something about a data point.
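The preprocessing pieces above, OneHotEncoder on strings plus ColumnTransformer, can be combined; here is a minimal sketch (the toy data and column indices are made up for illustration) that one-hot encodes a string column and scales a numeric one in a single step:

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Toy data: column 0 is a string category, column 1 is numeric.
X = np.array([["red", 1.0],
              ["blue", 2.0],
              ["red", 3.0]], dtype=object)

ct = ColumnTransformer([
    ("onehot", OneHotEncoder(), [0]),   # strings are encoded directly, no integer step
    ("scale", StandardScaler(), [1]),   # numeric column gets standardized
])
Xt = ct.fit_transform(X)
print(Xt.shape)  # 2 one-hot columns + 1 scaled column per sample
```

Each transformer only sees the columns assigned to it, which avoids the awkward workarounds needed before 0.20.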
TransformedTargetRegressor

With this, you can transform the target before building the model and transform back after prediction. In terms of absolute error, this can be much better than not using target transformation, and the systematic skew in the predictions is reduced or removed.

OpenML dataset loader

This replaces the mldata loader, which was no longer maintained. OpenML lets you create tasks on a dataset, upload data, and upload the results of a problem.

Loky - a robust and reusable executor

joblib is upgraded and now includes a new tool called Loky, an alternative to multiprocessing.pool.Pool and concurrent.futures.ProcessPoolExecutor. The replacement was necessary as the old tools were not very robust. Loky has a deadlock-free implementation and consistent spawn behavior, and it fixes the random crashes that previously happened with BLAS/OpenMP libraries.

Global config for scikit-learn

A global configuration now exists for scikit-learn, which you can use with sklearn.config_context or sklearn.set_config, either to set a global state or as a context manager. It supports two options, one to increase speed and one to reduce memory consumption: setting assume_finite to True skips validity checks on input, which saves time on large datasets, while working_memory limits RAM usage and currently applies to distance computations and nearest neighbor computations. The options can be used like this: set_config(assume_finite=None, working_memory=None)

Early stopping for gradient boosting

You can stop building the model based on the tolerance and number of iterations you set. For example, the model will stop if there was no improvement beyond 0.01% over the last five iterations. Something similar exists for stochastic gradient descent too.
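The target transformation idea can be sketched with synthetic data (the data and the log/exp choice here are illustrative, not from the talk): fit a linear model on log(y) and let TransformedTargetRegressor map predictions back through exp automatically:

```python
import numpy as np
from sklearn.compose import TransformedTargetRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))
# A multiplicative (skewed) target: linear in log-space.
y = np.exp(0.3 * X.ravel() + rng.normal(scale=0.05, size=200))

reg = TransformedTargetRegressor(
    regressor=LinearRegression(),
    func=np.log,          # applied to y before fitting
    inverse_func=np.exp,  # applied to predictions afterwards
)
reg.fit(X, y)
pred = reg.predict(np.array([[5.0]]))
print(pred)  # roughly exp(0.3 * 5), back on the original scale
```

Because the inverse transform is applied for you, downstream code never has to remember that the model was fit in log-space.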
Other changes

A glossary has been added that explains all the terms used in scikit-learn, to make the library more welcoming for new users. There are also better default parameters, since it was found that most people use algorithms with their default parameters. The following changes will be made in future scikit-learn releases; until then you will receive warnings:

For random forests, the number of estimators will change from 10 to 100 (in version 0.22)
Cross validation will be 5-fold instead of 3-fold (in version 0.22)
In grid search, iid will be set to False (in version 0.22) and then removed (in version 0.24)
For LogisticRegression, the following defaults will change in scikit-learn 0.22: solver='lbfgs' from 'liblinear', and multi_class='auto' from 'ovr'

You can avoid the warnings in your code by setting the parameters explicitly yourself. Python 2.7 and 3.4 support will be dropped in scikit-learn 0.21. If you want to see examples of using the new features and some other useful tips by Dr. Mueller, watch the talk on YouTube.

Scikit Learn 0.20.0 is here!
Machine Learning in IPython with scikit-learn
Why you should learn Scikit-learn
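The default-parameter changes listed above can be silenced by pinning the moving values explicitly; a quick sketch using the built-in iris data (the specific estimators chosen are just examples):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Pin the values that change across releases instead of relying on defaults.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
lr = LogisticRegression(solver="lbfgs", max_iter=1000)

rf_scores = cross_val_score(rf, X, y, cv=5)  # explicit 5-fold CV
lr_scores = cross_val_score(lr, X, y, cv=5)
print(rf_scores.mean(), lr_scores.mean())
```

Code written this way behaves identically before and after the defaults flip, with no deprecation warnings in between.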


Hyatt Hotels launches public bug bounty program with HackerOne

Natasha Mathur
11 Jan 2019
3 min read
Hyatt Hotels Corporation launched its bug bounty program with HackerOne earlier this week. As part of the program, ethical hackers are invited to test Hyatt websites and apps to spot potential vulnerabilities in them.

“At Hyatt, protecting guest and customer information is our top priority and launching this program represents an important step that furthers our goal of keeping our guests safe every day,” stated Hyatt Chief Information Security Officer Benjamin Vaughn.

Hyatt Hotels Corporation, headquartered in Chicago, is a leading global hospitality company with a portfolio of 14 premier brands comprising more than 750 properties in more than 55 countries across six continents. Hyatt decided on HackerOne after conducting a deep review of the bug bounty marketplace. HackerOne's bug bounty programs reward friendly hackers who help discover security vulnerabilities in important software on the internet.

Hyatt is the first in the hotel industry to launch a bug bounty program. “By being the first organization in the hospitality industry to embrace the collaborative efforts of global security researchers, Hyatt hopes to continue to raise its already high level of security standards as well as learn from and collaborate with security researchers”, stated the Hyatt team.

The program was originally available as an invite-only private program, during which Hyatt paid hackers about $5,600 in bounties (bug bounty rewards). It is now public. Hackers are allowed to search for vulnerabilities on the hyatt.com domain, www.hyatt.com, m.hyatt.com, world.hyatt.com, and on Hyatt's mobile apps for iOS and Android. The company will pay hackers $4,000 for spotting critical vulnerabilities and $300 for low-severity issues.

The company will reward hackers for tracking vulnerabilities such as novel origin IP address discovery, authentication bypass, back-end system access via front-end systems, business logic bypass, container escape, SQL injection, cross-site request forgery, exploitable cross-site scripting, and WAF bypass, among other issues.

“Bug bounty programs are a proven method for advancing an organization's cybersecurity defenses. In today's connected society, vulnerabilities will always be present. Organizations like Hyatt are leading the way by taking this essential step to secure the data they are trusted to hold”, said HackerOne CEO Marten Mickos.

EU to sponsor bug bounty programs for 14 open source projects from January 2019
Airtable, a Slack-like coding platform for non-techies, raises $100 million in funding
The ‘Flock’ program provides grants to Aragon teams worth $1 million


Researchers build a deep neural network that can detect and classify arrhythmias with cardiologist-level accuracy

Bhagyashree R
11 Jan 2019
2 min read
A group of researchers from Stanford University and the University of California, together with iRhythm Technologies Inc. and the Veterans Affairs Palo Alto Health Care System, have built a model that can help in the diagnosis of irregular heart rhythms, also called arrhythmias. On Monday, the researchers shared their findings in a paper published in Springer Nature: Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network.

Detecting arrhythmias is a fairly easy task for an expert technician or a cardiologist, but is known to be quite challenging for computers. With the help of widely available ECG data and deep learning, this study aimed to improve the accuracy and scalability of automated ECG analysis.

For this study, the researchers built a 34-layer deep neural network (DNN) and trained it to detect arrhythmia in arbitrary-length ECG time series. The model was trained on 91,232 single-lead ECGs from 53,549 patients who used a single-lead ambulatory ECG monitoring device. The network learned to classify noise and sinus rhythm, and additionally learned to classify and segment twelve arrhythmia types present in the time series.

For testing the model, the researchers used an independent test dataset annotated by a consensus committee of board-certified practicing cardiologists; it is publicly available in iRhythm Technologies' GitHub repository. The model did well, achieving an average area under the receiver operating characteristic curve (ROC) of 0.97. Another measure of accuracy was F1, the harmonic mean of the positive predictive value and sensitivity; the F1 score of the DNN (0.837) exceeded that of average cardiologists (0.780).
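The F1 metric reported in the paper is just the harmonic mean the authors describe; a quick sketch (the component precision/recall values below are illustrative, not taken from the paper):

```python
def f1(ppv, sensitivity):
    """Harmonic mean of positive predictive value (precision) and sensitivity (recall)."""
    return 2 * ppv * sensitivity / (ppv + sensitivity)

# Illustrative values only: a model with precision 0.85 and recall 0.82.
print(round(f1(0.85, 0.82), 3))  # 0.835
```

Because it is a harmonic mean, F1 punishes an imbalance between precision and recall more than an arithmetic mean would.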
Researchers introduce a CNN-based system for identifying radioresistant cancer-causing cells
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
Our healthcare data is not private anymore: Study reveals that machine learning can be used to re-identify individuals from physical activity data

Shareholders sue Alphabet’s board members for protecting senior execs accused of sexual harassment

Natasha Mathur
11 Jan 2019
5 min read
Alphabet shareholder James Martin filed a lawsuit yesterday against Alphabet's board of directors, Larry Page, Sergey Brin, and Eric Schmidt for covering up the sexual harassment allegations against former top executives at Google and for paying them large severance packages. As mentioned in the lawsuit, Martin sued the company for breaching its fiduciary duty to shareholders, unjust enrichment, abuse of power, and corporate waste.

“The individual defendants breached their duty of loyalty and good faith by allowing the defendants to cause, or by themselves causing, the company to cover up Google's executives' sexual harassment, and caused Google to incur substantial damage”, reads the lawsuit.

The lawsuit, filed at the San Mateo County court, San Francisco, seeks major changes to Google's corporate governance. It calls for non-management shareholders to nominate three new candidates for election to the board, and for elimination of the current dual-class stock structure, which would take away the majority of the voting share from Page and Brin. It wants the former Google executives to repay the severance packages, benefits, and other compensation they received from Google. Additionally, it seeks to have the Alphabet directors pay punitive damages for the harm their engagement in corporate waste caused to Alphabet.

Apart from the lawsuit filed by Martin, Alphabet's board was hit with another lawsuit this week on behalf of two additional pension funds, the Northern California Pipe Trades Pension Plan and the Teamsters Local 272 Labor Management Pension Fund, which own Alphabet stock. That lawsuit makes similar allegations, accusing Alphabet's board members of ‘breaching their fiduciary duties by rewarding male harassers’ and ‘hiding the Google+ breach from the public’.

The news of Google paying its top executives outsized exit packages first came to light in October 2018, when the New York Times published its investigation into sexual misconduct at Google. It alleged that Google had protected Andy Rubin, creator of Android, and Amit Singhal, ex-senior VP of Google search, among other senior executives, after they were accused of sexual misconduct. Google reportedly paid Rubin a $90 million exit package along with a well-respected farewell. Similarly, Singhal was asked to resign in 2016 after accusations surfaced that he had groped a female employee at an offsite event in 2005. As per the NY Times report, Singhal received an exit package that paid him millions. Both Rubin and Singhal, however, denied the accusations.

In response to Google's handling of sexual misconduct, over 20,000 Google employees, along with vendors and contractors, organized the Google “walkout for real change” and walked out of their offices in November 2018 to protest the discrimination, racism, and sexual harassment encountered within Google. The employees laid out five demands as part of the walkout, including an end to forced arbitration in cases of discrimination and sexual harassment for employees. In response, Google eliminated its forced arbitration policy in cases of sexual harassment, a step soon followed by Facebook, which eliminated its own forced arbitration policy. Sundar Pichai, CEO of Google, wrote a note in which he admitted that he is ‘sincerely sorry’ and hopes to bring more transparency around sexual misconduct allegations.

The ‘Google walkout for real change’ Medium page responded to the lawsuit today, stating that they agree with the shareholders and that “anyone who enables abuse, harassment and discrimination must be held accountable, and those with the most power have the most to account for”.

The response also states that currently a small group of “mostly white” male executives makes decisions at Google that significantly impact workers and the world with “little accountability”. “We have all the evidence we need that Google's leadership does not have our best interests at heart. We need to change the way the system works, above and beyond addressing the wrongs of those who work within the system,” reads the post.

The lawsuit filed by Martin partly relies on non-public evidence: minutes of Alphabet board meetings in 2014 (concerning Rubin) and 2016 (concerning Singhal) that show the board members discussing severance packages for the two executives. This part was heavily redacted from the public filing at Google's demand. Both meetings, the full board meeting and the leadership development and compensation committee meeting, are covered in the evidence showing the approved payments to Rubin.

The lawsuit states that Google's directors agreed to pay Rubin because they wanted to ‘ensure his silence’: Google feared that if it fired him for cause, he would publicly reveal the details of sexual harassment and other wrongdoing within Google. Moreover, Google also asked the victims of sexual harassment to keep quiet once it found the sexual assault allegations credible.

“When Google covers up harassment and passes the trash, it contributes to an environment where people don't feel safe reporting misconduct. They suspect that nothing will happen or, worse, that the men will be paid and the woman will be pushed aside”, quotes the lawsuit.

For more coverage, check out the full suit filed by Martin and the two pension funds.
Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
BuzzFeed Report: Google’s sexual misconduct policy “does not apply retroactively to claims already compelled to arbitration”
Richard DeVaul, Alphabet executive, resigns after being accused of sexual harassment


FireEye’s Global DNS Hijacking Campaign suspects Iranian-based group as the prime source

Savia Lobo
11 Jan 2019
3 min read
FireEye, a US cybersecurity firm, has disclosed details about a DNS hijacking campaign. In its recent report, the company shared that it has identified widespread DNS hijacking affecting multiple domains belonging to government, telecommunications, and internet infrastructure entities across the Middle East and North Africa, Europe, and North America. FireEye analysts believe an Iranian-based group is the source behind these attacks, although they do not have definitive proof. The analysts also said that they “have been tracking this activity for several months, mapping and understanding the innovative tactics, techniques and procedures (TTPs) deployed by the attacker”.

The FireEye Intelligence team has also identified access from Iranian IPs to machines used to intercept, record, and forward network traffic. The team mentions that these IP addresses were previously observed during the response to an intrusion attributed to Iranian cyber espionage actors. The FireEye report highlights three different techniques used to conduct these attacks.

Techniques to manipulate DNS records and enable victim compromises

1. Altering DNS A Records

Source: FireEye

Here the attackers first log into a proxy box used to conduct non-attributed browsing and as a jumpbox to other infrastructure. The attacker then logs into the DNS provider's administration panel, utilising previously compromised credentials, and changes the DNS records for the victim's mail server in order to redirect it to their own mail server. The attackers have used Let's Encrypt certificates to support HTTPS traffic, and a load balancer to redirect victims back to the real email server after they've collected login credentials from victims on their shadow server. The username, password, and domain credentials are harvested and stored.

2. Altering DNS NS Records

Source: FireEye

This technique is the same as the previous one, except that the attacker exploits a previously compromised registrar or ccTLD.

3. A DNS Redirector

Source: FireEye

This technique is a combination of the previous two. The DNS Redirector is an attacker operations box that responds to DNS requests. If the domain is from inside the company, OP2 responds with an attacker-controlled IP address, and the user is redirected to the attacker-controlled infrastructure.

Analysts said that a large number of organizations have been affected by this pattern of DNS record manipulation and fraudulent SSL certificates, including telecoms and ISP providers, internet infrastructure providers, government, and sensitive commercial entities. According to the FireEye report, “While the precise mechanism by which the DNS records were changed is unknown, we believe that at least some records were changed by compromising a victim's domain registrar account.”

To know more about this news in detail, read the FireEye report.

FireEye reports North Korean state sponsored hacking group, APT38 is targeting financial institutions
Reddit posts an update to the FireEye’s report on suspected Iranian influence operation
Justice Department’s indictment report claims Chinese hackers breached business and government network
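One way to reason about the A-record manipulation FireEye describes is as drift from a pinned baseline. A toy, offline sketch (the domain names and IP addresses are made up, and a real monitor would resolve records live rather than take them as arguments):

```python
# Pinned "known good" A records per monitored domain (illustrative values only).
EXPECTED_A = {
    "mail.example.com": {"203.0.113.10", "203.0.113.11"},
}

def looks_hijacked(domain, resolved_ips):
    """Flag a domain whose resolved A records drift outside the pinned set."""
    expected = EXPECTED_A.get(domain)
    if expected is None:
        return False  # domain not monitored
    return not set(resolved_ips) <= expected

print(looks_hijacked("mail.example.com", ["198.51.100.7"]))   # unexpected IP: flagged
print(looks_hijacked("mail.example.com", ["203.0.113.10"]))   # matches baseline: not flagged
```

This only catches the first technique; NS-record tampering and DNS redirectors would need the same drift check applied to NS records and to resolver behavior.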