Tech News


LinkedIn used email addresses of 18M non-members to buy targeted ads on Facebook, reveals a report by DPC, Ireland

Bhagyashree R
26 Nov 2018
4 min read
A report published on Friday by Ireland's Data Protection Commissioner (DPC) revealed that LinkedIn, in a bid to bring more people onto the platform, used the email addresses of almost 18 million non-members to buy targeted ads on Facebook. As a result of the investigation, LinkedIn has stopped the practice and introduced a new setting that asks users' permission before their email addresses can be exported.

What was the DPC's investigation about?

The final report by Helen Dixon, the Data Protection Commissioner, presents the conclusions of an audit of LinkedIn's processing of personal data for the period 1 January to 24 May 2018. The audit was carried out after a non-LinkedIn user notified the DPC that LinkedIn had obtained and used the complainant's email address for targeted advertising on the Facebook platform. The investigation revealed that LinkedIn had processed the hashed email addresses of approximately 18 million non-members.

LinkedIn implemented several actions to stop processing user data for the purposes that gave rise to the complaint. To verify that LinkedIn was indeed taking the right remedial measures, the DPC conducted the audit, which concluded:

"As a result of the findings of our audit, LinkedIn Corp was instructed by LinkedIn Ireland, as data controller of EU user data, to cease pre-compute processing and to delete all personal data associated with such processing prior to 25 May 2018."

One thing the report does not reveal is the source of these email addresses. Other parts of the report cover cases such as the inquiry into Facebook's use of facial recognition, how WhatsApp and Facebook exchange user data, and the Yahoo security breach that affected 500 million users.

What was LinkedIn's response?

Denis Kelleher, Head of Privacy (EMEA) at LinkedIn, told TechCrunch that the company has taken appropriate action to address the issue:

"We appreciate the DPC's 2017 investigation of a complaint about an advertising campaign and fully cooperated. Unfortunately, the strong processes and procedures we have in place were not followed and for that we are sorry. We've taken appropriate action, and have improved the way we work to ensure that this will not happen again. During the audit, we also identified one further area where we could improve data privacy for non-members and we have voluntarily changed our practices as a result."

LinkedIn has also introduced a new privacy setting that, by default, blocks other users from exporting your email address. You can find it under Settings & Privacy -> Privacy -> Who Can See My Email Address?

[Image source: TechCrunch]

This step could prevent some spam and give users more control over whom they share their email address with. According to TechCrunch, however, the update could also upset some users:

"But the launch of this new setting without warning or even a formal announcement could piss off users who'd invested tons of time into the professional networking site in hopes of contacting their connections outside of it."

LinkedIn confirmed to TechCrunch that the setting was newly introduced to give users better privacy:

"This is a new setting that gives our members even more control of their email address on LinkedIn. If you take a look at the setting titled 'Who can download your email', you'll see we've added a more detailed setting that defaults to the strongest privacy option. Members can choose to change that setting based on their preference. This gives our members control over who can download their email address via a data export."

You can read the full report on TechCrunch's official website. Also, read the report published by the DPC for more details.

Read Next
Creator-Side Optimization: How LinkedIn's new feed model helps small creators
Email and names of Amazon customers exposed due to 'technical error'; number of affected users unknown
Facebook shares update on last week's takedowns of accounts involved in "inauthentic behavior"

The ethical mobile OS /e/: MVP beta2 ported to Android Oreo, an /e/-powered smartphone may be released soon!

Melisha Dsouza
26 Nov 2018
3 min read
Early last month, e.foundation announced the beta release of /e/, an OS from the creator of Mandrake Linux that is completely focused on user privacy. The team recently announced that they have finished porting /e/-MVP beta2 to Android Oreo, which means the OS can now support many newer devices. /e/-MVP beta2 is now supported on 49 different devices, including:

Xiaomi Redmi Note 5 Pro
Xiaomi Mi A1
Xiaomi Mi 6
Xiaomi Pocophone F1
OnePlus 5T
Google Pixel XL

"At /e/, we want to build an alternative mobile operating system that everybody can enjoy using, one that is a lot more respectful of users' data privacy while offering them real freedom of choice. We want to be known as the ethical mobile operating system, built in the public interest."
-/e/ project leader, Gaël Duval

The OS is free and open source. Its ROM uses microG instead of Google's core apps, and /e/ ships new default applications, including a mail app, an SMS app (Signal), a chat application (Telegram), and much more. Mozilla's network location provider (NLP) makes geolocation available even when a GPS signal is not.

The team has also been receiving requests to release a smartphone with /e/ preinstalled. They have finally taken the suggestion on board and started talks with several hardware makers. They have also started a poll asking users which OS they would prefer on the next Fairphone. The Fairphone is a smartphone designed with ecological and ethical issues in mind: it is made from recycled, recyclable, and responsibly sourced materials, with minimal packaging, and if any component breaks down or the user wants to upgrade it, only that element needs to be replaced.

Apart from this short announcement on the blog, there is not much documentation for users to refer to and clarify their doubts about the project. Here is what users had to say about the announcement:

[Image source: HackerNews]

If you are interested in "leaving" Apple and Google and "reconquering" your privacy, read Duval's Twitter thread, which answers common user queries on /e/. Head over to e.foundation's official blog to know more about this announcement.

Read Next
Gaël Duval, creator of the ethical mobile OS /e/, calls out Tim Cook for being an 'opportunist' in the ongoing digital privacy debate
Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
90% of Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study

Facebook plans to change its algorithm to demote “borderline content” that promotes misinformation and hate speech on the platform

Natasha Mathur
23 Nov 2018
3 min read
Mark Zuckerberg, CEO of Facebook, published a "blueprint for content governance and enforcement" last week, describing an update to the news feed algorithm that will demote "borderline" (click-bait) content to curb the spread of misinformation, hate speech, and bullying on the platform.

Facebook has landed in a lot of controversies over user data and privacy. Just last week, the New York Times published a report on how Facebook follows a strategy of 'delaying, denying, and deflecting' the blame for the controversies surrounding it. Given these controversies, it goes without saying that Facebook is trying to bring the engagement that such content attracts down.

"One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content. At scale, it can undermine the quality of public discourse and lead to polarization. In our case, it can also degrade the quality of our services," said Zuckerberg.

[Image: the natural engagement pattern on Facebook]

Facebook's research observed that no matter where the lines are drawn for the kind of content allowed, once a piece of content gets close to that line, people engage with it more on average, even when they report not liking it. Facebook calls this an "incentive problem" and has decided to penalize borderline content so that it gets less distribution and engagement. The natural engagement pattern has been adjusted and now looks like this:

[Image: the adjusted engagement pattern]

In the adjusted graph, distribution declines as content gets more sensational, and people are disincentivized from creating provocative content that sits as close to the line as possible. "We train AI systems to detect borderline content so we can distribute that content less," adds Zuckerberg. Facebook's process for adjusting the curve is similar to its process for identifying harmful content, but is now focused on identifying borderline content instead.

Moreover, Facebook's research found that the natural pattern of borderline content getting more engagement applies not just to news but to all categories of content. For instance, photos close to the line of nudity, such as those with revealing clothing or sexually suggestive positions, had more engagement on average before the distribution curve was adjusted to discourage this.

Facebook considers this issue especially important to address because, although social networks generally expose people to more diverse views, some pages can still "fuel polarization". Facebook has therefore decided to apply these distribution changes not just to feed ranking but to all of its recommendation systems that suggest things users should join.

An alternative to reducing distribution would be moving the line that defines what kind of content is acceptable. However, Facebook thinks this would not effectively address the underlying incentive problem, which is the bigger issue at hand: since the engagement pattern exists no matter where the line is drawn, what needs to change is the incentive, not simply which content is removed.

"By fixing this incentive problem in our services, we believe it'll create a virtuous cycle: by reducing sensationalism of all forms, we'll create a healthier, less polarized discourse where more people feel safe participating," said Zuckerberg.
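Facebook has not published its model, so the mechanism can only be illustrated in the abstract. The sketch below is purely hypothetical: it treats the demotion as a multiplier that shrinks a post's distribution the closer its borderline score gets to the policy line. The function names, the exponential shape, and the steepness parameter are all invented for illustration:

```python
import math

def demotion_multiplier(borderline_score: float, steepness: float = 8.0) -> float:
    """Hypothetical demotion curve: 1.0 for clearly benign content,
    approaching 0.0 as content nears the policy line (score -> 1.0).

    borderline_score: a classifier output in [0, 1], where 1.0 sits on the line.
    """
    return math.exp(-steepness * borderline_score)

def distribution(base_reach: float, borderline_score: float) -> float:
    # Instead of engagement rising as content approaches the line,
    # distribution now falls off sharply.
    return base_reach * demotion_multiplier(borderline_score)

if __name__ == "__main__":
    for score in (0.0, 0.5, 0.9):
        print(f"score={score:.1f} -> reach={distribution(10_000, score):.0f}")
```

Run against a base reach of 10,000, the toy curve leaves benign content untouched and cuts near-the-line content to a tiny fraction, which is the inverted incentive the post describes.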
Read Next
Facebook's outgoing Head of communications and policy takes blame for hiring PR firm 'Definers' and reveals more
Facebook AI researchers investigate how AI agents can develop their own conceptual shared language
Facebook shares update on last week's takedowns of accounts involved in "inauthentic behavior"

U.S. Postal Service patches an API exploit that impacted 60 million USPS users’ data

Savia Lobo
23 Nov 2018
4 min read
Early this week, the U.S. Postal Service patched an API exploit that allowed anyone with an account on USPS.com to view other users' account details and even modify account details on their behalf. The exploit affected 60 million USPS users.

KrebsOnSecurity was contacted last week by a researcher who discovered the problem but asked to remain anonymous. According to KrebsOnSecurity, "The researcher said he informed the USPS about his finding more than a year ago yet never received a response. After confirming his findings, KrebsOnSecurity contacted the USPS, which promptly addressed the issue."

The problem stemmed from an authentication weakness in a USPS web component, an API that was part of the USPS "Informed Visibility" program designed to give mail senders near real-time tracking data. According to KrebsOnSecurity, "the flaw let any logged-in usps.com user query the system for account details belonging to any other users, such as email address, username, user ID, account number, street address, phone number, authorized users, mailing campaign data and other information."

"Many of the API's features accepted 'wildcard' search parameters, meaning they could be made to return all records for a given data set without the need to search for specific terms. No special hacking tools were needed to pull this data, other than knowledge of how to view and modify data elements processed by a regular Web browser like Chrome or Firefox," KrebsOnSecurity added.

Nicholas Weaver, a researcher at the International Computer Science Institute and lecturer at UC Berkeley, said: "This is not even Information Security 101, this is Information Security 1, which is to implement access control. It seems like the only access control they had in place was that you were logged in at all. And if you can access other people's data because they aren't enforcing access controls on reading that data, it's catastrophically bad and I'm willing to bet they're not enforcing controls on writing to that data as well."

Following the disclosure, the USPS added a validation step to prevent unauthorized changes: if anyone tries to modify the email address associated with a user's USPS account via the API, a confirmation message is now sent to the email address tied to that account.

KrebsOnSecurity states, "It does not appear USPS account passwords were exposed via this API, although KrebsOnSecurity conducted only a very brief and limited review of the API's rather broad functionality before reporting the issue to the USPS. The API at issue resides here; a copy of the API prior to its modification on Nov. 20 by the USPS is available here as a text file."

Robert Hansen, chief technology officer at Bit Discovery, a security firm in Austin, Texas, said: "This could easily be leveraged to build up mass targeted spam or spear phishing. It should have been protected via authentication and validated against the logged in user in question."

In a statement shared with KrebsOnSecurity, the USPS said it currently has no information that this vulnerability was leveraged to exploit customer records, and that the information shared with the USPS allowed it to quickly mitigate the vulnerability. Here's the rest of the statement:

"Computer networks are constantly under attack from criminals who try to exploit vulnerabilities to illegally obtain information. Similar to other companies, the Postal Service's Information Security program and the Inspection Service uses industry best practices to constantly monitor our network for suspicious activity."

"Any information suggesting criminals have tried to exploit potential vulnerabilities in our network is taken very seriously. Out of an abundance of caution, the Postal Service is further investigating to ensure that anyone who may have sought to access our systems inappropriately is pursued to the fullest extent of the law."
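Weaver's point is that the fix is an ordinary per-object authorization check. The USPS API itself is not public, so the sketch below is entirely hypothetical; it only shows the generic pattern of validating that the record being read belongs to the logged-in user and refusing wildcard parameters, not anything USPS-specific:

```python
class PermissionDenied(Exception):
    pass

def get_account_details(session_user_id: str, requested_user_id: str, db: dict) -> dict:
    """Return account details only if the requester owns the record."""
    if "*" in requested_user_id:
        # Refuse wildcard lookups outright: one request maps to one record.
        raise PermissionDenied("wildcard queries are not allowed")
    if requested_user_id != session_user_id:
        # The missing check: being logged in is not the same as being authorized.
        raise PermissionDenied("users may only read their own account")
    return db[requested_user_id]

if __name__ == "__main__":
    db = {"alice": {"email": "alice@example.com"},
          "bob": {"email": "bob@example.com"}}
    print(get_account_details("alice", "alice", db))   # allowed
    try:
        get_account_details("alice", "bob", db)        # denied
    except PermissionDenied as err:
        print("denied:", err)
```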
To know more about this news in detail, visit the KrebsOnSecurity website.

Read Next
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019
Final release for macOS Mojave is here with new features, security changes and a privacy flaw

How can Gitless save you from Git?

Amrata Joshi
23 Nov 2018
3 min read
Git is a version-control system used for tracking changes in computer files and coordinating work on those files. It is usually said that Git is easy to learn and fast. However, many users find it difficult to understand Git's concepts and to work with it, and its UI is not considered user-friendly.

Issues with Git

A research paper titled "A Case of Computational Thinking: The Subtle Effect of Hidden Dependencies on the User Experience of Version Control", by researchers from the University of Cambridge, states that users don't have much confidence in Git: they manually copy their local working set to a separate backup directory before performing any operations via Git. File tracking is one of the major issues in Git. If someone creates a new file and then starts tracking changes to it, the new modifications might not get saved and an older version of the file might be reflected. Renaming a file with Git is also difficult. According to Santiago Perez De Rosso, a PhD student in the Software Design Group at MIT, the major issues with Git are switching branches, the detached head state, and untracking files.

Gitless makes a difference

In response to these complaints, Gitless, a version control system built on top of Git, was released earlier this year. It is considerably easier to use than Git, and since Gitless is implemented on top of Git, one can always fall back on Git.

How is Gitless better?

Easy to save changes with Gitless. Gitless has a flexible commit command that makes saving changes to the repository very easy. Another advantage is that Gitless has no staging area. One can also change the classification of any file to tracked, untracked, or ignored, even if the file doesn't exist at the head.

[Image source: Gitless]

Branching. In Gitless, a branch is a separate, independent line of development that keeps its working files separate from other branches. Whenever a user switches to a different branch, the contents of the working directory are saved, and the ones corresponding to the destination branch are retrieved. Gitless also remembers the file classifications, and one need not worry about uncommitted changes conflicting with changes in the destination branch.

[Image source: Gitless]

The commands below cover most day-to-day Gitless work (a sample session is sketched after this article):

gl init creates an empty repo, or creates one from an existing remote repository
gl track starts tracking changes to files
gl untrack stops tracking changes to files
gl checkout checks out committed versions of files
gl history views the history
gl branch lists, creates, edits, or deletes branches
gl switch switches branches

Read more about Gitless on Gitless' official website.

Read Next
IntelliJ IDEA 2018.3 is out with support for Java 12, accessibility improvements, GitHub pull requests, and more
GitHub Octoverse: The top programming languages of 2018
GitHub now supports the GNU General Public License (GPL) Cooperation Commitment as a way of promoting effective software regulation
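Tying the commands above together, here is what a minimal Gitless session could look like. Only the command names come from the article; the file name, branch name, and flags are illustrative, and exact flags may vary across Gitless versions (check gl <command> -h):

```
$ gl init                        # create an empty Gitless repo in the current directory
$ echo "draft" > notes.txt       # create a file (hypothetical example file)
$ gl track notes.txt             # classify the file as tracked
$ gl commit -m "first draft"     # commit; there is no staging area to manage
$ gl branch -c experiment        # create a branch (flag assumed; see gl branch -h)
$ gl switch experiment           # switch; working-directory contents are saved per branch
$ gl history                     # view the commit history
```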

Linux is reverting the STIBP support due to major slowdowns in Linux 4.20

Bhagyashree R
23 Nov 2018
2 min read
Linux 4.20 has shown major performance regressions, and the reason, as Phoronix reported yesterday, is Single Thread Indirect Branch Predictors (STIBP). The support is now being reverted from the upcoming Linux 4.19.4 and 4.14.83 kernel point releases.

Linus Torvalds, the creator of the Linux kernel, was also surprised by the performance hit STIBP caused in Linux 4.20. He posted to the kernel mailing list that the performance impact was not communicated before the patches were merged, and that he believes it should not be enabled by default:

"This was marked for stable, and honestly, nowhere in the discussion did I see any mention of just *how* bad the performance impact of this was. When performance goes down by 50% on some loads, people need to start asking themselves whether it was worth it. It's apparently better to just disable SMT entirely, which is what security-conscious people do anyway. So why do that STIBP slow-down by default when the people who *really* care already disabled SMT? I think we should use the same logic as for L1TF: we default to something that doesn't kill performance. Warn once about it, and let the crazy people say 'I'd rather take a 50% performance hit than worry about a theoretical issue'."

Tests by Michael Larabel also revealed that Linux 4.20 suffers significant performance losses in many workloads, larger than some of the earlier Spectre and Meltdown mitigations. The regression measurably affects PHP, Python, Java, and many other workloads, and even gaming performance to some extent.

The STIBP support for cross-hyperthread Spectre V2 mitigation was backported to the Linux 4.14 and 4.19 LTS series, and is now being reverted. You can find the reverts in Greg Kroah-Hartman's linux-stable-rc tree.

[Image source: Phoronix]

On the current Linux 4.20 Git tree, STIBP remains in place while a better approach to handling the performance issues is under review. Michael Larabel expects the new patch series to be ready for merging before Linux 4.20 ships, in approximately one month. (A quick way to check your own machine's Spectre V2 mitigation status is sketched after this article.)

To know more, check out Michael Larabel's post on Phoronix: Linux Stable Updates Are Dropping The Performance-Pounding STIBP.

Read Next
Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
Red Hat releases Red Hat Enterprise Linux 8 beta; deprecates Btrfs filesystem
Soon, RHEL (Red Hat Enterprise Linux) won't support KDE
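For readers wondering whether their own kernel currently applies STIBP, recent kernels report mitigation status under /sys/devices/system/cpu/vulnerabilities/; the spectre_v2 entry mentions STIBP when it is active. A minimal sketch, assuming a Linux machine with that sysfs directory present:

```python
from pathlib import Path

# Recent kernels expose per-vulnerability mitigation status here.
VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

if VULN_DIR.is_dir():
    for entry in sorted(VULN_DIR.iterdir()):
        # e.g. "spectre_v2: Mitigation: Full generic retpoline, STIBP, IBPB"
        print(f"{entry.name}: {entry.read_text().strip()}")
else:
    print("This kernel does not expose /sys/.../vulnerabilities (too old?).")
```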

GitLab 11.5 released with group security and operations-focused dashboards, access control for GitLab Pages

Amrata Joshi
23 Nov 2018
3 min read
Yesterday, the team at GitLab released GitLab 11.5. GitLab, an application for the DevOps lifecycle, helps developer teams work together efficiently to secure their code.

Group Security Dashboard and operations-focused dashboard

To strengthen security, security teams need access to information about the security status of all their projects, and need to understand what the most important task to take up next is. The same is true for directors of security in any organization, who need a high-level view of possible critical issues that might affect development. GitLab 11.5 introduces a new Group Security Dashboard, launched at the group level. It gives a summary of all the SAST (Static Application Security Testing) vulnerabilities across the projects in a group, and provides a list of actionable entries that can be used to start a remediation process. The dashboard also has a new look and new visualizations; the goal is a single tool that security teams can use instead of juggling several.

GitLab 11.5 also comes with a new, operations-focused dashboard that provides a summary of the key operational metrics for each project a user is interested in, including the most recent commit, time since the last deployment, and active alerts. A user can also set this dashboard as their preferred homepage.

GitLab 11.5 brings access control to GitLab Pages

GitLab Pages is a feature that lets users publish static websites directly from a repository in GitLab, and is an easy way to serve static content on the web. GitLab 11.5 brings access control to GitLab Pages: the access permissions applied to issues and code can now also be applied to static webpages, so access can be restricted to those the user permits. Users who lack permission will get a 404 when visiting the link for those webpages.

Deploy Knative with the GitLab Kubernetes integration

Knative is a Kubernetes-based platform used for building, deploying, and managing modern serverless workloads. With GitLab 11.5, users can deploy Knative to their existing Kubernetes cluster using the GitLab Kubernetes integration. Tasks such as routing and managing traffic, source-to-container builds, and scaling to zero have now become easy.

GitLab 11.5 comes with improvements to Issue Board cards

Issue Boards are a central place of collaboration in GitLab, where teams can organize and view planned and ongoing work. In GitLab 11.5, issue cards have been redesigned to show relevant information in a simple, organized manner: the issue title, confidentiality, time-tracking information, due date, labels, weight, and assignee.

To read the full GitLab updates, check out the official post by GitLab.

Read Next
GitLab 11.4 is here with merge request reviews and many more features
GitLab 11.3 released with support for Maven repositories, protected environments and more
GitLab raises $100 million, Alphabet backs it to surpass Microsoft's GitHub

Researchers discover a new Rowhammer attack, ‘ECCploit’ that bypasses Error Correcting Code protections

Savia Lobo
23 Nov 2018
4 min read
Yesterday, researchers from the Vrije Universiteit Amsterdam's VUSec group announced that a new Rowhammer attack, known as ECCploit, bypasses the ECC protections built into several widely used models of DDR3 chips. In their paper, "Exploiting Correcting Codes: On the Effectiveness of ECC Memory Against Rowhammer Attacks", the researchers write, "Many believed that Rowhammer on ECC memory, even if plausible in theory, is simply impractical. This paper shows this to be false: while harder, Rowhammer attacks are still a realistic threat even to modern ECC-equipped systems."

The Rowhammer attack, discovered back in 2015, exploits an unfixable physical weakness in the silicon of certain types of memory chips to transform the data they store. As a defense against this attack, researchers developed an enhancement known as error-correcting code (ECC). ECC, present in higher-end chips, was believed to be an absolute defense against potentially disastrous bitflips that change 0s to 1s and vice versa. "Rowhammer can flip bits in ways that have major consequences for security, for instance, by allowing an untrusted app to gain full administrative rights, breaking out of security sandboxes or virtual-machine hypervisors, or rooting devices running the vulnerable DIMM."

Kaveh Razavi, one of the VUSec researchers who developed the exploit, said, "ECCploit shows for the first time that it is possible to mount practical Rowhammer attacks on vulnerable ECC DRAM."

Working of ECC

ECC stores redundant control bits next to the data bits inside the DIMMs' memory words. CPUs use these words to quickly detect and repair flipped bits. ECC was originally designed to protect against a naturally occurring phenomenon in which cosmic rays flip bits in newer DIMMs. After Rowhammer's appearance in 2015, ECC rose to popularity as arguably the most effective defense against the attack. However, ECC has limitations:

ECC generally adds enough redundancy to repair single bitflips in a 64-bit word
When two bitflips occur in a word, it will cause the underlying program or process to crash
When three bitflips occur in the right places, ECC can be completely bypassed
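Those three bullet points describe the classic behavior of a SECDED (single-error-correct, double-error-detect) code. The toy below is a sketch using an extended Hamming(8,4) code over 4 data bits rather than the 64-bit words real DIMMs use, but it reproduces all three cases, including a carefully placed triple flip that slips through undetected:

```python
def encode(nibble: int) -> list:
    """Extended Hamming(8,4): bits[1..7] hold a Hamming(7,4) codeword
    (data at positions 3, 5, 6, 7; parity at 1, 2, 4); bits[0] is an
    overall parity bit over the other seven."""
    bits = [0] * 8
    for i, pos in enumerate((3, 5, 6, 7)):
        bits[pos] = (nibble >> i) & 1
    for p in (1, 2, 4):  # parity p covers every position whose index has bit p set
        for i in range(1, 8):
            if i & p and i != p:
                bits[p] ^= bits[i]
    for i in range(1, 8):
        bits[0] ^= bits[i]
    return bits

def decode(bits: list) -> str:
    syndrome = 0
    for i in range(1, 8):
        if bits[i]:
            syndrome ^= i            # XOR of set positions names the error location
    overall = 0
    for b in bits:
        overall ^= b
    if syndrome == 0 and overall == 0:
        return "word is clean"
    if overall == 1:                 # odd number of flips: assume one, correct it
        bits[syndrome if syndrome else 0] ^= 1
        return "single bitflip corrected"
    return "double bitflip detected -> uncorrectable (machine check / crash)"

word = encode(0b1011)

one = word.copy(); one[5] ^= 1
print(decode(one))      # single bitflip corrected

two = word.copy(); two[5] ^= 1; two[6] ^= 1
print(decode(two))      # double bitflip detected

# Three flips at positions 1, 2, 3: the syndrome is 1 ^ 2 ^ 3 = 0, so the
# decoder "repairs" the overall parity bit and reports success while the
# data bit at position 3 stays silently wrong: ECC bypassed.
three = word.copy()
for pos in (1, 2, 3):
    three[pos] ^= 1
print(decode(three))    # looks like a corrected single flip
```

Real DIMMs use wider SECDED words, but the failure mode ECCploit exploits is exactly this last case: enough simultaneous, well-placed flips to produce a valid-looking syndrome.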
According to Ars Technica, "The VUSec researchers spent months reverse-engineering the process, in part by using syringe needles to inject faults into chips and subjecting chips to a cold-boot attack. By extracting data stored inside the supercooled chips as they experienced the errors, the researchers were able to learn how computer memory controllers processed ECC control bits." A video of the researchers using the cold-boot technique is available at https://youtu.be/NrYWVEjEfw0.

The researchers thus demonstrated that ECC merely slows down the Rowhammer attack and is not enough to stop it. They tested ECCploit on four hardware platforms:

AMD Opteron 6376 Bulldozer (15h)
Intel Xeon E3-1270 v3 Haswell
Intel Xeon E5-2650 v1 Sandy Bridge
Intel Xeon E5-2620 v1 Sandy Bridge

They said they tested several memory modules from different manufacturers, and confirmed that a significant number of Rowhammer bitflips occurred in a type of DIMM tested by a different team of researchers.

Are all DDR chips affected?

The researchers haven't demonstrated that ECCploit works against ECC in DDR4 chips, a newer type of memory chip favored by higher-end cloud services. The paper also doesn't show that ECCploit can penetrate hypervisors or secondary Rowhammer defenses, and there is no indication that ECCploit works reliably against endpoints typically used in cloud environments such as AWS or Microsoft Azure.

To know more about this in detail, visit the Ars Technica blog.

Read Next
Seven new Spectre and Meltdown attacks found
Security issues in nginx HTTP/2 implementation expose nginx servers to DoS attack
Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]

What if buildings of the future could compute? European researchers make a proposal.

Prasad Ramesh
23 Nov 2018
3 min read
European researchers have proposed an idea for buildings that could compute. In the paper "On buildings that compute. A proposal", published this week, they propose integrating computation into various parts of a building, from cement and bricks to paint.

What is the idea about?

Smart homes today are made up of several individual smart appliances, which may work individually or be interconnected via a central hub. "What if intelligent matter of our surrounding could understand us humans?" The idea is that the walls of a building, in addition to supporting the roof, would have more functionality: sensing, calculating, communicating, and even producing power. Each brick or block could be thought of as a decentralized computing entity, and these blocks could contribute to a large-scale parallel computation. This would transform a smart building into an intelligent computing unit that people can live in and interact with. Such smart buildings, the researchers say, could potentially offer protection from crime, natural disasters, or structural damage within the building, or simply greet the people residing there.

When nanotechnology meets embedded computing

The proposal involves using nanotechnology to embed computation and sensing directly into construction materials, including intelligent concrete blocks and stimuli-responsive smart paint. The photosensitive paint would sense the internal and external environment, while a nanomaterial-infused concrete composition would sense the building environment and implement large-scale parallel information processing, resulting in distributed decision-making. The result is a building that can be seen as a huge parallel computer consisting of computing concrete blocks.

The key ingredients of the idea are functional nanoparticles that are photo-, chemo- and electro-sensitive. A range of electrical properties would span all the electronic elements mixed into the concrete. The concrete is used to make building blocks equipped with processors, which gather information from distributed sensory elements, help in decision-making, communicate location, and enable advanced computing. Together, the blocks form a wall that acts as a huge parallel array processor. The researchers envision a single building, or a small colony of them, turning into a large-scale universal computing unit.

This is an interesting idea, bizarre even, but its practicality is blurry. Can its applications justify the cost involved in creating such a building? There is also the question of sustainability: how long will the building last before it has to be redeveloped? I, for one, think that redevelopment would almost certainly undo the computational aspect of it.

For more details, read the research paper.

Read Next
Home Assistant: an open source Python home automation hub to rule all things smart
The iRobot Roomba i7+ is a cleaning robot that maps and stores your house and also empties the trash automatically
Cortana and Alexa become best friends: Microsoft and Amazon release a preview of this integration

Google hints shutting down Google News over EU’s implementation of Article 11 or the “link tax”

Bhagyashree R
23 Nov 2018
3 min read
Last week, The Guardian reported that Google may shut down Google News in Europe if the "link tax" is implemented in a way that requires the company to pay news publishers. Under the "link tax", or Article 11, news publishers must receive fair and proportionate remuneration from information society service providers for the use of their publications.

The vice president of Google News, Richard Gingras, expressed his concern about the proposal and told The Guardian that whether the news service is discontinued in Europe will depend on the final verdict: "We can't make a decision until we see the final language."

The first draft of the "link tax", or more formally the Directive on Copyright in the Digital Single Market, was issued in 2016. After several revisions and discussions, it was approved by the European Parliament on 12 September 2018. A formal trilogue discussion between the European Commission, the Council of the European Union, and the European Parliament has now been initiated to reach a final decision, which is expected to be announced in January 2019. Another part of the proposed directive, Article 13, is designed to ensure content creators are paid for material uploaded to sites such as the Google-owned YouTube. Both Article 11 and Article 13 have faced a lot of criticism since the directive was proposed.

Gingras further noted that when the Spanish government attempted to charge Google a link tax in 2014, the company responded by shutting down Google News in the country and removing Spanish newspapers from the service internationally, which resulted in a tremendous fall in traffic to Spanish news websites. "We would not like to see that happen in Europe," Gingras added.

Julia Reda, an MEP, however, believes that this "link tax" will not be as extreme as the link tax implemented in Spain, where Google was required to pay publishers even if they didn't want to be paid. "What we think is more likely is that publishers will have the choice to ask for Google to pay or not," she told WIRED.

To know more in detail about Google's response to the link tax, read the full story on The Guardian.

Read Next
YouTube's CBO speaks out against Article 13 of EU's controversial copyright law
German OpenStreetMap protest against "Article 13" EU copyright reform making their map unusable
BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"

Unity has won the Technology and Engineering Emmy Award for excellence in engineering creativity

Amrata Joshi
22 Nov 2018
2 min read
Yesterday, Unity Technologies won its first Technology and Engineering Emmy Award for excellence in engineering creativity, for its collaboration with Disney Television Animation on the broadcast-quality shorts Baymax Dreams. The National Academy of Television Arts and Sciences (NATAS), a service organization for the advancement of the arts and sciences of television, recognized the efforts Unity put into Baymax Dreams. Earlier this month, Unity also won an Emmy for 3D Engine Software for the Production of Animation.

The Baymax Dreams series is based on the story of Hiro, a 14-year-old tech genius, and his robot Baymax. The characterization and the visual effects are mesmerizing.

https://www.youtube.com/watch?v=DpuUnNLZf5k

Different Unity teams, from animation, film, virtual reality (VR), and gaming, united for this creative project. They designed the entire workflow to match the story and the vision of the director. Graphics engineer John Parsaie used Unity's High Definition Render Pipeline (HDRP) to create materials like Baymax's emissive 'night light'. He also worked with Unity artist Keijiro Takahashi on the voxelization effect, which can be seen when Baymax first enters his dream state.

The characters and artwork were built and reviewed in VR. This helped increase clarity, made the visuals easier to imagine, and enabled experimenting with different styles and formats. The team at Unity used multiple tools, including Unity's multi-track sequencer Timeline, Cinemachine's suite of smart cameras, and Post-Processing Stack v2, for layout, lighting, and compositing.

To read more about this news, check out the official blog post by Unity. The Technology and Engineering Emmy Awards, in partnership with the NAB Show (National Association of Broadcasters), will be held in Las Vegas on Sunday, April 7, 2019.

Read Next
Exploring shaders and materials in Unity 2018.x to develop scalable mobile games
Building your own Basic Behavior tree in Unity [Tutorial]
Getting started with ML agents in Unity [Tutorial]

dav1d to release soon with all features of AV1 and better performance than libaom

Bhagyashree R
22 Nov 2018
2 min read
After introducing dav1d at the Video Developer Days 2018, the team behind it announced yesterday that the first version will be released very soon. dav1d is a new cross-platform, open source AV1 decoder developed with speed and correctness in mind. AV1's reference decoder, libaom, needs major improvements: it was built primarily for research purposes. The VideoLAN, VLC, and FFmpeg communities therefore started working on this new decoder, sponsored by the Alliance for Open Media (AOM).

What improvements and features have been implemented?

dav1d will come with features such as Film Grain, Super-Res, Scaled References, and other more obscure features of the bitstream, available for both 8- and 10-bit depths. Along with that, the developers have improved the public API and reduced the size of the C code. To assure secure decoding of AV1, the team has used fuzz testing. "Then, we've fuzzed the decoder a lot: we are now above 99% of functions covered, and 97% of lines covered on OSS-FUZZ; and we usually fix all the issues in a couple of days. This should assure you a secure decoding for AV1," said Jean-Baptiste Kempf, president of the VideoLAN non-profit organization, in his announcement.

Performance comparison

dav1d shows impressive performance on AVX2 processors, which cover more than 50% of the CPUs used on desktops. SSE and ARM optimizations are in progress and should be completed in the next few weeks. To test performance, clips from Netflix, Elecard, and YouTube were decoded and compared against aomdec. On Haswell, dav1d achieved an average relative decode performance of 2.49x; on a more modern Zen machine, it was even higher, at 3.49x.

[Image: global performance comparison. Source: dav1d]

You can read more in detail about dav1d in Jean-Baptiste Kempf's announcement.

Read Next
Presenting dav1d, a new lightweight AV1 decoder, by VideoLAN and FFmpeg
YouTube starts testing AV1 video codec format, launches AV1 Beta Playlist
A new Video-to-Video Synthesis model uses Artificial Intelligence to create photorealistic videos

ClojureCUDA 0.6.0 now supports CUDA 10

Prasad Ramesh
22 Nov 2018
2 min read
ClojureCUDA is a Clojure library that supports parallel computations on the GPU with CUDA. With this library, you can access high-performance computing and GPGPU from the Clojure programming language.

Installation

ClojureCUDA 0.6.0 now has support for the new CUDA 10. To start using it:

Install the CUDA 10 Toolkit
Update your GPU drivers
Update the ClojureCUDA version in project.clj

All existing code should work without requiring any changes.

CUDA and libraries

CUDA is the most widely used environment for high-performance computing on NVIDIA GPUs. You can now use CUDA directly from the interactive Clojure REPL without having to wrangle with the C++ toolchain. High-performance libraries like Neanderthal take advantage of ClojureCUDA to deliver speed dynamically to Clojure programs. With these higher-level libraries, you can perform fast calculations with just a few lines of Clojure, without having to write the GPU code yourself. But writing the lower-level GPU code is also not so difficult in an interactive Clojure environment.

ClojureCUDA features

High-performance computing: CUDA enables various hardware optimizations on NVIDIA GPUs, and users can access the leading CUDA libraries for numerical computing, such as cuBLAS, cuFFT, and cuDNN.

Optimized for Clojure: ClojureCUDA is built with a focus on Clojure. The interface and functions fit a functional style and are aligned to number crunching with CUDA.

Reusable: The library closely follows the CUDA driver API, so users can easily translate examples from the best CUDA books.

Free and open source: It is licensed under the Eclipse Public License (EPL), the same license used for Clojure. ClojureCUDA and the other libraries by uncomplicate are open source; you can choose to contribute on GitHub or donate on Patreon.

For more details and code examples, visit the dragan blog.

Read Next
Clojure 1.10.0-beta1 is out!
Stable release of CUDA 10.0 out, with Turing support, tools and library changes
NVTOP: An htop like monitoring tool for NVIDIA GPUs on Linux

Mozilla criticizes EU’s terrorist content regulation proposal, says it’s a threat to user rights

Sugandha Lahoti
22 Nov 2018
4 min read
In a new blog post on open internet policy initiatives, Mozilla has criticized the EU's terrorist content regulation proposal, which was released in September, calling it a threat to "the ecosystem and users' rights". Mozilla had also published a post when the bill was proposed, saying that it "threatens internet health in Europe."

In September, the EU proposed a bill to tackle the spread of 'terrorist' content on the internet. Under this bill, government-appointed authorities would have the unilateral power to suppress speech on the internet:

"The regulation proposes a removal order which can be issued as an administrative or judicial decision by a competent authority in a Member State. In such cases, the hosting service provider is obliged to remove the content or disable access to it within one hour. In addition, the Regulation harmonizes the minimum requirements for referrals sent by Member States' competent authorities and by Union bodies (such as Europol) to hosting service providers, to be assessed against their respective terms and conditions. Finally, the Regulation requires hosting service providers, where appropriate, to take proactive measures proportionate to the level of risk and to remove terrorist material from their services, including by deploying automated detection tools."
Source: European Commission

Mozilla has previously condemned the bill, saying, "It would undermine due process online; compel the use of ineffective content filters; strengthen the position of a few dominant platforms while hampering European competitors; and, ultimately, violate the EU's commitment to protecting fundamental rights." In the recent blog post, Mozilla points out the most worrying elements of the proposal:

"The definition of 'terrorist' content is extremely broad, opening the door for a huge amount of over-removal (including the potential for discriminatory effect) and the resulting risk that much lawful and public interest speech will be indiscriminately taken down.
Government-appointed bodies, rather than independent courts, hold the ultimate authority to determine illegality, with few safeguards in place to ensure these authorities act in a rights-protective manner.
The aggressive one hour timetable for removal of content upon notification is barely feasible for the largest platforms, let alone the many thousands of micro, small and medium-sized online services whom the proposal threatens.
Companies could be forced to implement 'proactive measures' including upload filters, which, as we've argued before, are neither effective nor appropriate for the task at hand.
The proposal risks making content removal an end in itself, simply pushing terrorists off the open internet rather than tackling the underlying serious crimes."

A Hacker News user agreed with Mozilla but noted that, fortunately, the proposal is yet to be sanctioned: "This proposal is very bad. But luckily it is only a proposal. The council and parliament will still vote for this before it becomes European law. Both bodies will likely oppose, and the proposal will be significantly amended."

Mozilla has also said that it will continue to scrutinize, deliberate, and clarify how to protect its users and the internet ecosystem. Another Hacker News user said he is happy "Mozilla's on top of this early in the process. Let's hope they manage to remove the problematic parts they outline in this post."

Some say the EU was unnecessarily 'bashed' for this: "I don't see how the EU as an institution is bashed for this. This is a similar process as occurs in any other member state and other democracies. Not to mention the US, with its secret laws and national security letters. My personal opinion is that illegal content (CP, inciting violence) should be moderated quickly, where failure to act has big consequences. What I don't like about the proposal is that it is enforced by governments, and not some judiciary body. I hope the council and parliament will amend the proposal in such a way this is reflected in a final law."

Responding to the comment above, another user disagreed: "I think people are seeing a general trend of internet laws and bashing their creators. One could argue that this stage of the process is where bashing should occur. When it did with other ridiculous legislation, on both sides of the Atlantic, nobody excused the institutions making the suggestions. To many, myself included, this trend has to stop and sadly there isn't enough bashing to curb it, especially as there are so many cheering it on."

Read Next
Mozilla v. FCC: Mozilla challenges FCC's elimination of net neutrality protection rules
Is Mozilla the most progressive tech organization on the planet right now?
Senator Ron Wyden's data privacy law draft can punish tech companies that misuse user data

MLflow 0.8.0 released with improved UI experience and better support for deployment

Savia Lobo
22 Nov 2018
3 min read
Last week, the team at Databricks released MLflow 0.8.0. MLflow is an open source platform for managing the end-to-end machine learning lifecycle. It is used for tracking experiments and for managing and deploying models from a variety of ML libraries. It is also responsible for packaging ML code in a reusable, reproducible form so it can be shared with other data scientists.

MLflow 0.8.0 features

In MLflow 0.8.0, the SageMaker and pyfunc servers support the 'split' JSON format, which lets the client specify the order of columns. The server can now also be passed gunicorn options; gunicorn can use threads instead of processes, saving memory. This version also brings TensorFlow 1.12 support, and the Keras module no longer needs to be loaded at predict time.

Major change

On the CLI, mlflow sklearn serve has been removed in favor of mlflow pyfunc serve, which takes the same arguments but works against any pyfunc model.

Major improvements in MLflow 0.8.0

This version includes several new features, including an improved UI experience and support for deploying models directly to the Azure Machine Learning Service Workspace.

Improved MLflow UI experience

Metrics and parameters are now grouped, by default, into a single tabular column each, to avoid an explosion of columns. Users can customize this view by sorting the parameters and metrics, and can click on each parameter or metric to view it in a separate column. Runs that are nested inside other runs can now be grouped by their parent run and expanded or collapsed altogether; a run can be nested by calling mlflow.start_run or mlflow.run while another run is active. Though MLflow gives each run a UUID by default, one can now also assign a name to a run and edit it later, which is easier to remember than a number (a short sketch of named, nested runs appears after this article). Finally, there is no need to reconfigure the view each time: the MLflow UI remembers the filters, sorting, and column setup in the browser's local storage.

Support for deployment of models to Azure ML Service

In this version, the Microsoft Azure Machine Learning deployment tooling has been modified to deploy MLflow models packaged as Docker containers. One can use the mlflow.azureml module to package a python_function model into an Azure ML container image, which can then be deployed to the Azure Kubernetes Service (AKS) and Azure Container Instances (ACI) platforms.

Major bug fixes

The server now behaves better when the environment and run files are corrupted. The Azure Blob Storage artifact repo now supports Windows paths. And deleting the default experiment no longer causes it to be recreated, a problem present in the previous version.

Read more about this news on Databricks' blog.

Read Next
Introducing EuclidesDB, a multi-model machine learning feature database
Google releases Magenta studio beta, an open source python machine learning library for music artists
Technical and hidden debts in machine learning – Google engineers give their perspective
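For readers who haven't used the tracking API, here is a minimal sketch of the named, nested runs the new UI groups together. The run names and logged values are invented for illustration, and while run_name and nested are arguments of mlflow.start_run in the tracking API, the exact argument names in 0.8.0 itself may differ slightly:

```python
import mlflow

# A named parent run with two named child runs; MLflow 0.8.0's UI groups
# the children under the parent and lets you expand or collapse them.
with mlflow.start_run(run_name="hyperparam-sweep"):
    for lr in (0.01, 0.1):
        with mlflow.start_run(run_name=f"lr={lr}", nested=True):
            mlflow.log_param("learning_rate", lr)
            # ... train a model here ...
            mlflow.log_metric("val_accuracy", 0.9)  # placeholder value
```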