
Tech News


VFEmail suffers complete data wipe-out!

Savia Lobo
22 Feb 2019
3 min read
On Monday, February 11, Wisconsin-based email provider VFEmail was attacked by an intruder who destroyed all of the company's primary and backup data in the United States. Initial signs of the attack appeared that same day, when users began tweeting at the company's Twitter account to report that they were no longer receiving messages.

According to Krebs on Security, "VFEmail tweeted that it had caught a hacker in the act of formatting one of the company's mail servers in the Netherlands." This was followed by another tweet stating, "nl101 is up, but no incoming email. I fear all US-based data may be lost." VFEmail's founder, Rick Romero, then tweeted yesterday, "Yes, @VFEmail is effectively gone. It will likely not return. I never thought anyone would care about my labor of love so much that they'd want to completely and thoroughly destroy it."

https://twitter.com/Havokmon/status/1095297448082317312

Another tweet on the VFEmail account said that the attacker formatted all disks on every server; VFEmail has lost every VM and all files hosted on the available servers. "NL was 100% hosted with a vastly smaller dataset. NL backups by the provider were intact, and service should be up there."

https://twitter.com/VFEmail/status/1095038701665746945

Romero has posted several updates on the company's website, one of which reads, "We have suffered catastrophic destruction at the hands of a hacker, last seen as aktv@94.155.49.9". He also wrote, "At this time I am unsure of the status of existing mail for US users. If you have your own email client, DO NOT TRY TO MAKE IT WORK. If you reconnect your client to your new mailbox, all your local mail will be lost."

John Senchak, a longtime VFEmail user from Florida, told Krebs on Security that the attack completely deleted his entire inbox at the company: some 60,000 emails sent and received over more than a decade were lost. He also said, "It looked like the IP was a Bulgarian hosting company. So I'm assuming it was just a virtual machine they were using to launch the attack from. There definitely was something that somebody didn't want found. Or, I really pissed someone off. That's always possible."

The company has assured users that it is working to recover the data as soon as possible. To know more and stay updated, read VFEmail's complete Twitter thread.

Security researchers disclose vulnerabilities in TLS libraries and the downgrade attack on TLS 1.3
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
Apple's CEO, Tim Cook calls for new federal privacy law while attacking the 'shadow economy' in an interview with TIME


The Ionic team announces the release of Ionic React Beta

Bhagyashree R
22 Feb 2019
2 min read
Yesterday, the team behind Ionic announced the beta release of Ionic React. Developers can now make use of all the Ionic components in their React applications. Ionic React ships with almost 70 components, including buttons, cards, menus, tabs, alerts, modals, and much more. It also ships with TypeScript type definitions.

Ionic is an open source framework that consists of UI components for building cross-platform applications. These components are written in HTML, CSS, and JavaScript and can easily be deployed natively to iOS and Android devices, to the desktop with Electron, or to the web as a progressive web app.

Historically, Ionic has been associated with Angular, but this changed with its recent release, Ionic 4: developers can now use the Ionic app development framework alongside any frontend framework. The Ionic team has been working towards making Ionic work with React and Vue for a long time. React developers already have React Native to build native apps for iOS and Android, but with Ionic React they will also be able to create hybrid mobile, desktop, and progressive web apps. In the future, the team is also planning to make React Native and Ionic work together.

You can easily get started with Ionic React using the create-react-app tool. The Ionic team also recommends using TypeScript in your apps for a better developer experience. As Ionic React is still in its early days, it is advised not to use it in production.

To read the full announcement, visit Ionic's official website.

Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
Ionic v4 RC released with improved performance, UI Library distribution and more
Ionic framework announces Ionic 4 Beta


JFrog acquires DevOps startup ‘Shippable’ for an end-to-end DevOps solution

Melisha Dsouza
22 Feb 2019
2 min read
JFrog, a leading company in DevOps, has acquired Shippable, a cloud-based startup focused on Kubernetes-ready continuous integration and delivery (CI/CD) that helps developers ship code and deliver app and microservice updates. This strategic acquisition, JFrog's fifth, aims to provide customers with a "complete, integrated DevOps pipeline solution". The collaboration between JFrog and Shippable will allow users to automate their development processes from the moment code is committed all the way to production.

Shlomi Ben Haim, co-founder and CEO of JFrog, says in the official press release: "The modern DevOps landscape requires ever-faster delivery with more and more automation. Shippable's outstanding hybrid and cloud native technologies will incorporate yet another best-of-breed solution into the JFrog platform. Coupled with our commitments to universality and freedom of choice, developers can expect a superior out-of-the-box DevOps platform with the greatest flexibility to meet their DevOps needs."

According to an email sent to Packt Hub, JFrog will now allow developers to have a completely integrated DevOps pipeline with JFrog, while still retaining full freedom to choose their own solutions in JFrog's universal DevOps model. The plan is to release the first technology integrations with JFrog Enterprise+ this coming summer, and a full integration by Q3 of this year. According to JFrog, this acquisition will result in a more automated, complete, open, and secure DevOps solution on the market.

This is just another victory for JFrog. The company previously announced a $165 million Series D funding round, and last year it launched JFrog Xray, a binary analysis tool that performs recursive security scans and dependency analyses on all standard software package and container types.

Avi Cavale, founder and CEO of Shippable, says that Shippable users and customers will now "have access to leading security, binary management and other high-powered enterprise tools in the end-to-end JFrog Platform", and that the combined forces of JFrog and Shippable can make full DevOps automation from code to production a reality.

Spotify acquires Gimlet and Anchor to expand its podcast services
Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Adobe Acquires Allegorithmic, a popular 3D editing and authoring company


Redis Labs moves from Apache2 modified with Commons Clause to Redis Source Available License (RSAL)

Melisha Dsouza
22 Feb 2019
3 min read
Redis Labs joins the streak of software firms tweaking their licenses to prevent cloud service providers from misusing their open source code. Today, Redis Labs announced a change in its license from Apache2 modified with Commons Clause to the Redis Source Available License (RSAL). This is the second time the company has changed its license. Back in August 2018, Redis Labs changed the license of its Redis Modules from AGPL to Apache2 modified with Commons Clause, to ensure that open source companies would continue to contribute to their projects and maintain sustainable businesses in the cloud era. That move was initially received with some skepticism, with some people incorrectly assuming that the Redis core had gone proprietary.

Relating this move to open source companies like MongoDB and Confluent, Redis Labs says that every company has taken a different approach to stop cloud providers from exploiting open source projects developed by others by packaging them into proprietary services and using their "monopoly power to generate significant revenue streams".

Feedback from multiple users on improving the license to better serve developers' needs identified three major areas that needed to be addressed:

The term Apache2 modified by Commons Clause caused confusion with some users, who thought they were only bound by the Apache2 terms.
Commons Clause's language used the term "substantial" to define what is and is not allowed; there was a lack of clarity around the meaning of this term.
Some Commons Clause restrictions regarding "support" worked against Redis Labs' intention to help grow the ecosystem around Redis Modules.

Taking all of this into consideration, Redis Labs has changed the license of Redis Modules to the Redis Source Available License (RSAL).

What is the Redis Source Available License (RSAL)?

RSAL is a software license created by Redis Labs, applicable only to certain Redis Modules running on top of open source Redis. It aims to grant rights equivalent to permissive open source licenses for the vast majority of users. The license will "allow developers to use the software; modify the source code, integrate it with an application; and use, distribute or sell their application." The RSAL introduces just one restriction: the application cannot be a database, a caching engine, a stream processing engine, a search engine, an indexing engine, or an ML/DL/AI serving engine.

According to Yiftach Shoolman, co-founder and Chief Technology Officer of Redis Labs, this move will not have any effect on the Redis core license and shouldn't really affect most developers who use the company's modules (RediSearch, RedisGraph, RedisJSON, RedisML, and RedisBloom).

Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code
Confluent, an Apache Kafka service provider adopts a new license to fight against cloud service providers


Qt Creator 4.9 Beta released with QML support, programming language support and more!

Amrata Joshi
22 Feb 2019
2 min read
The team at Qt has been shipping a lot of updates lately. This week, the Qt team released Qt Design Studio 1.1, and yesterday it announced the release of Qt Creator 4.9 Beta.

What's new in Qt Creator 4.9 Beta?

Generic programming language support
The team has added support for the document outline and for code actions, which allow the language server to suggest fixes or refactoring actions at a specified place in the code. The Qt highlighter is now based on the KSyntaxHighlighting library, the library used in KDE.

Projects
`Expand All` has been added to the context menu, and the `Close All Files in Project` action is now supported. It is also now possible to close all of a project's files when the project itself is closed.

C++ support
The UI for diagnostics from the Clang analyzer tools has been improved: diagnostics are now grouped by file, and diagnostics from the project's header files are also included. This release guards against applying Fix-its to files that have changed in the meantime. In the Clazy configuration, it is now possible to enable or disable individual checks.

QML support
The QML parser has been updated to Qt 5.12, adding support for ECMAScript 7.

Python support
This release adds project templates for Qt for Python.

Nim support
Code completion based on NimSuggest has been added.

Profiling
This release integrates Perf, a powerful performance profiling tool for software running on Linux systems.

Operating systems
On Windows, this release adds support for MSVC 2019. On macOS, Touch Bar support has been added to Qt Creator. For remote Linux targets, the Botan-based SSH backend has been replaced by the OpenSSH tools.

Major fixes
Dragging a file from the `Projects` view has been fixed, and the crash on `Find Usages` has been fixed.

To know more about this news, check out Qt's official blog post.
Qt Design Studio 1.1 released with Qt Photoshop bridge, updated timeline and more
Qt for Python 5.12 released with PySide2, Qt GUI and more
Qt team releases Qt Creator 4.8.0 and Qt 5.12 LTS


GitLab considers moving to a single Rails codebase by combining the two existing repositories

Amrata Joshi
22 Feb 2019
4 min read
The team at GitLab is considering moving towards a single Rails repository by combining the two existing repositories. The GitLab Community Edition code would remain open source and MIT-licensed, and the GitLab Enterprise Edition code would remain source-available and proprietary.

The challenges with having two repositories

The Ruby on Rails code of GitLab is currently maintained in two repositories: the gitlab-ce repository for code with an open source license, and the gitlab-ee repository for code with a proprietary, source-available license. Having two similar but separate repositories can make feature development difficult and error-prone when making any change in GitLab. To demonstrate the problem, the team at GitLab has given a few examples:

Duplicated work during feature development: a frontend-only merge request needed a backport to the CE repository. Backporting means duplicating work in order to avoid future conflicts and changes to the code supporting the feature.
A simple change can break master: a minor change in the CE repository failed the pipeline in the master branch.
Conflicts during preparation for regular releases: merge requests for both the CE repository and the EE repository need to be created for a regular release, e.g. the 11.7.5 release. When the pipelines pass, the EE repository requires a merge from the CE repository, which can cause additional conflicts, pipeline failures, and similar delays during which the CE distribution release is also delayed.

Steps taken by the GitLab team to improve the situation

Before 2016, merging the CE repository into the EE repository was done when the team was ready to cut a release; the number of commits was small, so it could be done by one person. As the number of commits between the two repositories grew in 2016, the task was divided between seven developers who were responsible for merging the code once a day. This worked fine for some time, until delays started happening due to failed specs or difficult merge conflicts. By the end of 2017, the team merged an MR that allowed the creation of automated MRs between the two repositories. This task ran every three hours, which allowed a smaller number of commits to be worked on. By the end of 2018, the number of changes going into the CE and EE repositories had grown to thousands of commits in some cases, making the automated MRs insufficient. The Merge Train tool was then created to automate these workflows further; it automatically rejected merge conflicts and preferred changes from one repository over the other.

What is the GitLab team proposing?

The gitlab-ce distribution package is built from the gitlab-ce repository, which offers only the Core feature set, while the gitlab-ee distribution package is built from the gitlab-ee repository. The change the team is considering proposing would merge the gitlab-ce and gitlab-ee repositories into a single gitlab repository. The design for merging the two codebases will detail the required work and process changes. The proposed change would pertain only to the Ruby on Rails repository.

Expected changes

The gitlab-ce and gitlab-ee repositories may be replaced with a single gitlab repository, with all open issues and merge requests moved into it.
All frontend assets such as JavaScript, CSS, images, and views will be open sourced under the MIT license.
The proprietary backend code will be located in the /ee directory.
The merged documentation will clearly state which features belong to which feature set.

The downsides

The GitLab team was clear about the possible downsides of this approach: users installing from source currently clone the gitlab-ce repository, and after the change the clone will also fetch the proprietary code in the /ee directory. The database migration code is open source and does not require additional maintenance, so no additional work is required there. The team is now exploring better ways to solve the problem of busy work and plans to bring improvements to the current proposal.

To know more about this news, check out the post by GitLab.

Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications
GitLab 11.7 releases with multi-level child epics, API integration with Kubernetes, search filter box and more
Why moving from a monolithic architecture to microservices is so hard, Gitlab's Jason Plum breaks it down [KubeCon+CNC Talk]
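As a hypothetical illustration of the proposed layout (one repository with the proprietary backend isolated under a top-level /ee directory), an open source distribution could be produced simply by copying the tree while pruning that directory. The function and paths below are illustrative sketches under that assumption, not GitLab tooling:

```python
import pathlib
import shutil


def _skip_top_level_ee(repo: pathlib.Path):
    """Build a shutil.copytree ignore callback that prunes only the
    ee/ directory at the repository root (nested dirs named 'ee' survive)."""
    def ignore(src, names):
        # copytree calls this for every directory it visits; we only
        # drop 'ee' when the directory being copied is the repo root.
        return {"ee"} if pathlib.Path(src) == repo else set()
    return ignore


def make_foss_tree(repo: pathlib.Path, dest: pathlib.Path) -> None:
    """Copy a checkout to dest, leaving out the proprietary /ee directory.

    Hypothetical sketch: assumes all proprietary backend code lives under
    ee/ at the repository root, as the GitLab proposal describes.
    """
    shutil.copytree(repo, dest, ignore=_skip_top_level_ee(repo))
```

Running `make_foss_tree(pathlib.Path("gitlab"), pathlib.Path("gitlab-foss"))` would yield a tree containing only the MIT-licensed code, which is the property the proposal needs for source installs of the open source edition.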

IPython 7.3 releases with %conda and %pip magics and Python 3.8 compatibility

Bhagyashree R
22 Feb 2019
2 min read
This Monday, Matthias Bussonnier, a core developer on the IPython and Jupyter Project team, announced the release of IPython 7.3. Along with some major bug fixes, this release brings the %conda and %pip magics and compatibility with Python 3.8.

The %conda and %pip magics

IPython offers magic functions as an enhancement on top of the Python syntax, intended to solve common problems in workflows such as data analysis with Python. The biggest update in this release is the implementation of the %conda and %pip magics, which install packages into the kernel currently running an IPython or Jupyter notebook session. The %pip magic was already available, but it was limited to printing a warning; now it actually forwards commands to pip. Users will still need to restart the interpreter or kernel for newly installed packages to be taken into account. Even with this update, users are still advised to prefer the conda/pip command-line tools for installing packages.

Other updates and fixes

This release is compatible with Python 3.8, which brings assignment expressions, better thread safety, and more.
The `@magic.no_var_expand` decorator has been added to the execution magics to opt out of shell variable expansion.
The behavior of the %reset magic has changed: the POSIX aliases `clear`, `less`, `more`, and `man` are initialized during a reset.
The IPython command line now allows running *.ipynb files.

To read more about the updates in IPython 7.3, check out the official announcement.

IPython 7.2.0 is out!
IPython 7.0 releases with AsyncIO Integration and new Async libraries
PyPy 7.0 released for Python 2.7, 3.5, and 3.6 alpha
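The reason a %pip magic is safer than shelling out to `pip` from a notebook is that it pins pip to the kernel's own interpreter (`sys.executable`), so packages land in the environment the notebook actually uses rather than whichever `pip` is first on PATH. The helper below is an illustrative sketch of that forwarding, not IPython's implementation:

```python
import sys


def pip_magic_command(line: str) -> list:
    """Build the subprocess argv that a '%pip <line>' magic forwards to.

    Illustrative sketch: running pip as '<kernel interpreter> -m pip ...'
    guarantees the install targets the environment backing the running
    kernel, avoiding the classic wrong-environment install.
    """
    return [sys.executable, "-m", "pip", *line.split()]
```

For example, `pip_magic_command("install requests")` produces the argv `[sys.executable, "-m", "pip", "install", "requests"]`, which could then be handed to `subprocess.run`.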


Google finally ends forced arbitration for all its employees

Natasha Mathur
22 Feb 2019
4 min read
Google announced yesterday that it is ending forced arbitration for its full-time employees, as well as for Temps, Vendors, and Contractors (TVCs), for cases of harassment, discrimination, or wrongful termination. The changes will go into effect starting March 21, and employees will be able to litigate their past claims. Moreover, Google has also lifted the ban on class action lawsuits for employees, reports WIRED.

https://twitter.com/GoogleWalkout/status/1098692468432867328

In the case of contractors, Google has removed forced arbitration from the contracts of those who work directly with the firm. Outside firms employing contractors, however, are not required to follow the same policy; Google will notify other firms and ask them to consider the approach and see if it works for them.

Although this is very good news, the group called 'Googlers for ending forced arbitration' published a post on Medium stating that the "fight is not over". They have planned a meeting with legislators in Washington D.C. for next week, where six members of the group will advocate for an end to forced arbitration for all workers. "We will stand with Senators and House Representatives to introduce multiple bills that end the practice of forced arbitration across all employers. We're calling on Congress to make this a law to protect everyone", states the group.

https://twitter.com/endforcedarb/status/1098697243517960194

It was back in November that 20,000 Google employees, along with Temps, Vendors, and Contractors, walked out to protest the discrimination, racism, and sexual harassment encountered at Google's workplace. Google had waived forced arbitration for sexual harassment and assault claims in response to the Google walkout (a move soon followed by Facebook), but employees were not convinced: the forced arbitration policy still applied to contractors, temps, and vendors, and was still in effect for other forms of discrimination within the firm.

This was soon followed by Google contractors writing an open letter on Medium to Sundar Pichai, CEO of Google, in December, demanding that he address their demands for better conditions and equal benefits for contractors. Googlers also launched an industry-wide awareness campaign against forced arbitration last month, sharing information about arbitration on their Twitter and Instagram accounts throughout the day. The employees mentioned in a post on Medium that there were "no meaningful gains for worker equity … nor any actual change in employee contracts or future offer letters".

The pressure on Google for more transparency around its sexual assault policies had been building for quite a while. For instance, shareholder James Martin and two pension funds sued Alphabet's board members last month for protecting top execs accused of sexual harassment; the lawsuit urged more clarity around Google's policies. Similarly, Liz Fong-Jones, developer advocate at Google Cloud Platform, revealed earlier last month that she was leaving Google due to its lack of leadership in response to the demands made by employees during the Google walkout. Jones also published a post on Medium last week, where she talked about the 'grave concerns' she had related to strategic decisions made at Google.

"We commend the company in taking this step so that all its workers can access their civil rights through public court. We will officially celebrate when we see these changes reflected in our policy websites and/or employment agreements", states the end forced arbitration group.

Public reaction to the news is largely positive, with people cheering on Google employees for the victory:

https://twitter.com/VidaVakil/status/1098773099531493376
https://twitter.com/jas_mint/status/1098723571948347392
https://twitter.com/teamcoworker/status/1098697515858182144
https://twitter.com/PipelineParity/status/1098721912111464450

Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus


Linux use-after-free vulnerability found in Linux 2.6 through 4.20.11

Savia Lobo
21 Feb 2019
2 min read
Last week, a Huawei engineer reported a vulnerability present in kernels from the early Linux 2.6 series through version 4.20.11. The Kernel Address Sanitizer (KASAN), which detects dynamic memory errors within Linux kernel code, was used to uncover the use-after-free vulnerability, which has been present since early Linux versions. The use-after-free issue was found in the networking subsystem's sockfs code and could lead to arbitrary code execution.

KASAN (along with the other sanitizers) has already proven quite valuable in spotting various coding mistakes, hopefully before they are exploited in the real world, and the discovery of CVE-2019-8912 is another feather in its cap. The CVSS v3.0 severity metrics give this vulnerability a 9.8 CRITICAL score.

A fix for this vulnerability has already been released and will reach Linux distributions within a couple of days; it will probably also be backported to supported older Linux kernel versions. According to a user on Hacker News, "there may not actually be a proof-of-concept exploit yet, beyond a reproducer causing a KASAN splat. When people request a CVE for a use-after-free bug they usually just assume that code execution may be possible."

To know more about this vulnerability, visit the NVD website.

Intel releases patches to add Linux Kernel support for upcoming dedicated GPU releases
Undetected Linux Backdoor 'SpeakUp' infects Linux, MacOS with crypto miners
OpenWrt 18.06.2 released with major bug fixes, updated Linux kernel and more!


KDE adds Matrix for improved IM services

Melisha Dsouza
21 Feb 2019
2 min read
Yesterday, KDE announced that it is officially adopting Matrix as a primary chat platform, and kde.org now has an official Matrix homeserver. According to KDE, Matrix will provide the right footing for an improved way of chatting and live-sharing information, with the features users expect from modern IM services: infinite scrollback, file transfer, typing notifications, read receipts, presence, search, conferencing, end-to-end encryption, and much more.

Other alternatives like Telegram, Slack, and Discord, although feature-rich, did not make the cut because "they are centralized and built around closed-source technologies and offer even less control than IRC", which the team had used as its solution for a long time. Matrix supports decentralized communication through its open protocol and network, and is making its mark in instant messaging as well as other fields such as IoT communication. It supports end-to-end encryption while being self-hosted and open source. KDE will be adopting Matrix as an open protocol, open network, and FOSS project.

The Matrix blog states: "There's now a shiny new homeserver (powered by Modular.im) on which KDE folk are welcome to grab an account if they want, rather than sharing the rather overloaded public matrix.org homeserver. The rooms have been set up on the server to match their equivalent IRC channels. The rooms continue to retain their other aliases (#kde:matrix.org, #freenode_#kde:matrix.org etc) as before."

Head over to KDE's official blog to know more about this announcement.

How to create a native mobile app with React Native [Tutorial]
250 bounty hunters had access to AT&T, T-Mobile, and Sprint customer location data, Motherboard reports
Wells Fargo's online and mobile banking operations suffer a major outage

Google’s home security system Nest Secure had a hidden microphone; Google says it was an “error”

Melisha Dsouza
21 Feb 2019
2 min read
Earlier this month, Google upgraded its home security and alarm system, Nest Secure, to work with Google Assistant, meaning Nest Secure customers can now perform tasks like asking Google about the weather. The device shipped with a microphone for this purpose, without the microphone being mentioned in the device's published specifications.

On Tuesday, a Google spokesperson got in touch with Business Insider and told them that the omission was an "error" on Google's part: "The on-device microphone was never intended to be a secret and should have been listed in the tech specs." Further, the Nest team added that the microphone has "never been on" and is activated only when users specifically enable the option. As for why the microphone was installed in the devices at all, the team said it was there to support future features "such as the ability to detect broken glass."

Before sending an official statement to Business Insider, the Nest team had replied to a similar concern from a user on Twitter in early February.

https://twitter.com/treaseye/status/1092507172255289344

Scott Galloway, professor of marketing at the New York University Stern School of Business, expressed strong sentiments about this news on Twitter.

https://twitter.com/profgalloway/status/1098228685155508224

Users have even accused Google of "pretending the mistake happened" and slammed the company over the error.

https://twitter.com/tshisler/status/1098231070275686400
https://twitter.com/JoshConstine/status/1098086028353720320

Apart from Google, there have also been multiple cases in the past of Amazon Alexa and Google Home devices listening in on people's conversations, thus invading privacy. Earlier this year, a family in Portland discovered that its Alexa-powered Echo device had recorded their private conversation and sent it to a random person in their contacts list.

Google's so-called "error" could lead to a drop in the number of customers buying its home security system, as well as a drop in the trust users place in Google's products. It is high time Google starts thinking along the lines of security standards and integrity in its products.

Amazon's Ring gave access to its employees to watch live footage of the customers, The Intercept reports
Email and names of Amazon customers exposed due to 'technical error'; number of affected users unknown
Google Home and Amazon Alexa can no longer invade your privacy; thanks to Project Alias!
Amrata Joshi
21 Feb 2019
5 min read

Facebook and Google pressured to work against 'anti-vaccine' trends after Pinterest blocks anti-vaccination content from its pinboards

U.S. lawmakers are questioning health officials and tech giants on their efforts to combat the harmful anti-vaccine misinformation that is spreading fast online. This misinformation is also potentially contributing to the five ongoing measles outbreaks in the US.

Pinterest stands against misinformation

Pinterest has taken a strong stand against the spread of misinformation related to vaccines. It has blocked all "vaccination"-related searches, since most results showed the scientifically disproven claim that vaccines aren't safe. On Wednesday, the company said it won't return any search results, including pins and boards, for terms related to vaccinations, whether in favor of or against them. It was noted that the majority of shared images on Pinterest cautioned people against vaccinations, despite medical guidelines demonstrating that most vaccines are safe for most people. The company has been making efforts on this front for quite some time now. In a statement to CNBC, Pinterest said it has been hard to remove this anti-vaccination content entirely, so it put the ban in place until it can figure out a more permanent strategy. It is working with health experts, including doctors, as well as the social media analysis company Storyful to come up with a better solution.

Pinterest has also taken steps to block content promoting false cancer cures, after realizing that a lot of such content redirected users to websites that discouraged them from getting traditional medical treatment, such as an essential oil claiming to be a cure for cancer. A Pinterest spokesperson said, "We want Pinterest to be an inspiring place for people, and there's nothing inspiring about misinformation. That's why we continue to work on new ways of keeping misleading content off our platform and out of our recommendations engine."

People have been appreciating this move by the company.
https://twitter.com/ekp/status/1098421637194559489

Facebook and Google working against the 'anti-vaccine' trends

Last week, the Senate health committee announced that it will hold a hearing on the anti-vaccine subject on 5th March. Adam Schiff, a Democrat from California, also sent letters to Google CEO Sundar Pichai and Facebook CEO Mark Zuckerberg. In the letters, Schiff expressed concerns over the outbreaks and over the role of tech companies in the spread of medically inaccurate information.

Schiff wrote, "If concerned parents see phony vaccine information in their Facebook newsfeeds or YouTube recommendations, it could cause them to disregard the advice of their children's physicians and public health experts and decline to follow the recommended vaccination schedule. Repetition of information, even if false, can often be mistaken for accuracy, and exposure to anti-vaccine content via social media may negatively shape user attitudes towards vaccination."

Schiff also referenced an article by The Guardian, reporting that searches on both Facebook and YouTube easily led users to anti-vaccine news, and expressed his concern over a report that Facebook is accepting payments for anti-vaccine ads. Last week, an article published by The Daily Beast identified seven Facebook pages that posted and promoted anti-vaccine news, targeting women over the age of 25.

In an emailed statement to Ars Technica, Facebook said, "We've taken steps to reduce the distribution of health-related misinformation on Facebook, but we know we have more to do. We're currently working on additional changes that we'll be announcing soon." According to Facebook, simply deleting anti-vaccine perspectives won't be an effective solution to the problem. The company is thinking about ways to boost the availability of factual information on vaccines and to further minimize the spread of misinformation.
In a statement to Bloomberg, Facebook said that it was considering "reducing or removing this type of content from recommendations, including Groups You Should Join, and demoting it in search results, while also ensuring that higher quality and more authoritative information is available."

Schiff wrote, "The algorithms which power these services are not designed to distinguish quality information from misinformation or misleading information, and the consequences of that are particularly troubling for public health issues." Even on YouTube, the first result under a search for "vaccines" is a video showing a "middle ground" debate between supporters of vaccines and those against them. The fourth result is an episode of a popular anti-vaccine documentary series called "The Truth About Vaccines", which has around 1.2 million views.

Though Google declined to comment on the letter from Schiff, the company noted that it has been working to improve its recommendation system and to make sure that relevant news sources and contextual information appear at the top of search results. Schiff also mentioned in his letter that he was pleased with the steps Google has already taken: "I was pleased to see YouTube's recent announcement that it will no longer recommend videos that violate its community guidelines, such as conspiracy theories or medically inaccurate videos, and encourage further action to be taken related to vaccine misinformation."

Last week, Lamar Alexander, chairman of the Senate health committee, along with ranking member Patty Murray (D-Wash.), wrote a letter to the Centers for Disease Control and Prevention and the Department of Health and Human Services. The lawmakers inquired about what health officials were doing to fight misinformation and help states deal with outbreaks.
The lawmakers wrote, "Many factors contribute to vaccine hesitancy, all of which demand attention from CDC and (HHS' National Vaccine Program Office)."

WhatsApp limits users to five text forwards to fight against fake news and misinformation
Facebook plans to change its algorithm to demote "borderline content" that promotes misinformation, and hate speech on the platform
Facebook's AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban
Bhagyashree R
21 Feb 2019
2 min read

Mozilla shares key takeaways from the Design Tools survey

Last year in November, Victoria Wang, a UX designer at Mozilla, announced a Design Tools survey. The motivation behind the survey was to get insight into the biggest CSS and web design issues developers and designers face. She shared the results of the survey yesterday.

The survey received more than 900 responses, which revealed the following issues:

Source: Mozilla

One of the main takeaways from the survey was that developers and designers, irrespective of their experience level, want to better understand CSS issues like unexpected scrollbars and sizing. Cross-browser compatibility was also one of the top issues; the Firefox DevTools team is now trying to find ways to ease the pain of debugging browser differences, including auditing, hints, and a more robust responsive design tool.

Some of the mid-ranked issues included building Flexbox layouts, building with CSS Grid Layout, and ensuring accessibility. To address these, the team is planning to improve the Accessibility Panel. The lowest-ranked issues included the lack of good visual/WYSIWYG tools, animations, WebGL, and SVG.

Source: Mozilla

The top issue developers face when working with a browser tool was "No easy way to move CSS changes back to the editor". To address it, the Mozilla team is planning to add export options to the Changes Panel and also introduce DOM breakpoints.

You can read more about the Design Tools survey on Mozilla's official website.

Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant
Mozilla releases Firefox 65 with support for AV1, enhanced tracking protection, and more!
Natasha Mathur
21 Feb 2019
3 min read

OpenAI team publishes a paper arguing that long term AI safety research needs social scientists

OpenAI, a non-profit artificial intelligence research firm, published a paper yesterday arguing that long-term AI safety research needs social scientists to make sure that AI alignment algorithms succeed when actual humans are involved. AI alignment (or value alignment) refers to the task of ensuring that AI systems reliably do what humans want them to do.

"Since we are trying to behave in accord with people's values, the most important data will be data from humans about their values", states the OpenAI team. However, to properly align advanced AI systems with human values, many uncertainties related to the psychology of human rationality, emotion, and biases would have to be resolved. The researchers believe these can be resolved via experimentation, in which they study humans to train the AI to reliably do what humans want. This would involve questioning people about what they want from AI and then training machine learning models on this data. Once the models are trained, they can be optimized to perform well as per these models.

But it's not that simple, because humans can't be completely relied upon when answering questions about their values. "Humans have limited knowledge and reasoning ability, and exhibit a variety of cognitive biases and ethical beliefs that turn out to be inconsistent on reflection", states the OpenAI team. The researchers believe that the different ways a question is presented can interact differently with human biases, which in turn can produce either low- or high-quality answers.

To solve this issue, the researchers propose experimental debates comprising only humans in place of the ML agents. Although these experiments will be motivated by ML algorithms, they will not involve any ML systems or need any kind of ML background. "Our goal is ML+ML+human debates, but ML is currently too primitive to do many interesting tasks.
Therefore, we propose replacing ML debaters with human debaters, learning how to best conduct debates in this human-only setting, and eventually applying what we learn to the ML+ML+human case", reads the paper.

Since this human-only debate doesn't require any machine learning, it becomes a purely social science experiment: motivated by ML considerations, but not needing ML expertise to run. This, in turn, keeps the core focus on the component of AI alignment uncertainty that is specific to humans.

The researchers state that a large proportion of AI safety researchers are focused on machine learning, even though that is not necessarily a sufficient background for conducting these experiments. This is why social scientists with experience in human cognition, behavior, and ethics are needed for the careful design and implementation of these rigorous experiments. "This paper is a call for social scientists in AI safety. We believe close collaborations between social scientists and ML researchers will be necessary to improve our understanding of the human side of AI alignment and hope this paper sparks both conversation and collaboration", state the researchers.

For more information, check out the official research paper.

OpenAI's new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words
OpenAI charter puts safety, standards, and transparency first
OpenAI launches Spinning Up, a learning resource for potential deep learning practitioners
Bhagyashree R
21 Feb 2019
3 min read

OnionShare 2, an open source tool that uses Tor onion services for securely sharing files, is now out!

This Monday, the community behind OnionShare released its next major version, OnionShare 2. The release comes with the macOS sandbox enabled by default, support for next-generation onion services, several new translations, and more. OnionShare is a free, open-source tool which allows users to share and receive files securely and anonymously using Tor onion services.

Following are some of the updates introduced in OnionShare 2:

The macOS sandbox enabled by default

The macOS sandbox is enabled by default in OnionShare 2. This will prevent hackers from accessing data or running programs on users' computers, even if they manage to exploit a vulnerability in OnionShare.

Next-generation Tor onion addresses

OnionShare 2 improves security by using next-generation Tor onion services, also known as v3 onion services. These next-generation onion services provide unguessable onion addresses to share, which look like this: lldan5gahapx5k7iafb3s4ikijc4ni7gx5iywdflkba5y2ezyg6sjgyd.onion. Users can still use v2 onion addresses if they want, by navigating to Settings and selecting "Use legacy addresses".

OnionShare addresses are ephemeral by default

As soon as the sharing is complete, the OnionShare address disappears from the internet entirely, as these addresses are intended for one-time use. This default behavior can be changed if you want to share the files with a group of people: go to the Settings menu and uncheck the "Stop sharing after files have been sent" option.

Public OnionShare addresses

By default, OnionShare addresses look like this: http://[tor-address].onion/[slug]. Here the slug consists of random words out of a list of 7,776 words. Even if an attacker figures out the tor-address part, they still won't be able to download the files you are sharing or run programs on your computer: they also need to know the slug, which works here as a password.
But since the slug is only two words, and the wordlist OnionShare uses is public, attackers can guess it. For cases where you want to share an OnionShare address publicly anyway, OnionShare 2 comes with a Public mode. With Public mode enabled, the OnionShare address will look like http://[tor-address].onion/, and the server will remain up no matter how many 404 errors it gets. To enable this mode, just go to the Settings menu and check the box next to "Public mode".

OnionShare 2 is translated into 12 languages

OnionShare 2 is translated into twelve new languages: Bengali, Catalan, Danish, French, Greek, Italian, Japanese, Persian, Portuguese Brazil, Russian, Spanish, and Swedish. You can select these languages from a dropdown.

Read the complete list of updates in OnionShare 2 shared by Micah Lee, a computer security engineer.

Understand how to access the Dark Web with Tor Browser [Tutorial]
Brave Privacy Browser has a 'backdoor' to remotely inject headers in HTTP requests: HackerNews
Signal introduces optional link previews to enable users understand what's behind a URL
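To get a feel for why a two-word slug is guessable, here is a minimal Python sketch of the scheme described above. This is not OnionShare's actual code, and the wordlist here is a placeholder of the same size (7,776 entries) rather than the real list that ships with OnionShare:

```python
import math
import secrets

# Placeholder stand-in for OnionShare's public wordlist; the real list
# ships with OnionShare and also contains 7,776 words.
WORDLIST_SIZE = 7776
wordlist = [f"word{i}" for i in range(WORDLIST_SIZE)]

def make_slug(num_words: int = 2) -> str:
    """Join randomly chosen words into a slug, as described in the article."""
    return "-".join(secrets.choice(wordlist) for _ in range(num_words))

slug = make_slug()
address = f"http://[tor-address].onion/{slug}"

# Two words from a 7,776-word list give 7776^2 = 60,466,176 possible
# slugs (~25.8 bits), which is why a public wordlist plus a short slug
# is guessable by a determined attacker who already knows the .onion
# address, unless the server shuts down after repeated wrong guesses.
entropy_bits = math.log2(WORDLIST_SIZE ** 2)
print(address, f"~{entropy_bits:.1f} bits")
```

Public mode sidesteps the question entirely: the slug is dropped from the address, and anyone who has the .onion address can connect, which is the intended trade-off for public sharing.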