
Tech News

Improbable says Unity blocked SpatialOS; Unity responds saying it has shut down Improbable and not SpatialOS

Sugandha Lahoti
11 Jan 2019
4 min read
A fresh drama has emerged between Unity and Improbable. According to yesterday's blog post by Improbable, the creator of SpatialOS, Unity has blocked SpatialOS based on a recent change in Unity's terms of service (clause 2.4). Unity has contested this, stating that Improbable's blog post was misleading, and added that it has terminated its relationship with Improbable without affecting anyone using SpatialOS.

What did Improbable say?

Unity updated its terms of service on Dec 5 and then informed Improbable directly on Jan 9 that SpatialOS had been revoked on Unity's game engine. Per the blog, "all existing SpatialOS games using Unity, including production games and in development games of all developers, are now in breach of Unity's license terms." The blog also states that Unity has stopped Improbable from continuing to work with the Unity engine, affecting its ability to support games.

The blog post criticized Unity's decision, stating that Unity's actions have done harm to projects across the industry, especially affecting vulnerable or small-scale developers. Moreover, this is a threat to games that have been funded on the promise of SpatialOS delivering next-generation multiplayer, with Unity as their choice of game engine. The Improbable team has also stated that going forward they will help developers using SpatialOS with Unity to finish, release and operate their games, and they have set up an emergency fund. They are also fully open-sourcing the code of the SpatialOS Game Development Kit for Unity under the MIT license.

How did Unity respond?

Unity has termed Improbable's blog 'incorrect', stating that they have "terminated their relationship with Improbable due to a failed negotiation with them after they violated Unity's Terms of Service. However, anyone using SpatialOS will not be affected." Unity also assures that even if a game developer runs a Unity-based game server on their own servers or generic cloud instances (like GCP, AWS or Azure), they are covered by Unity's EULA.

"From a technical standpoint, this is what our clarification on our TOS means: if you want to run your Unity-based game-server, on your own servers, or a cloud provider that provides you instances to run your own server for your game, you are covered by our EULA. We will support you as long as the server is running on a Unity supported platform."

Unity blocked Improbable because the company was making unauthorized and improper use of Unity's technology and name in connection with the development, sale, and marketing of its own products. Early last year, Unity informed Improbable in person that they were in violation of Unity's Terms of Service. Then, after six months, Unity informed Improbable about the violation in writing. Seeing no changes, Unity decided to take strict action by turning off Improbable's Unity Editor license keys about two weeks ago. Unity says they are trying to resolve the dispute with Improbable without affecting developers. SpatialOS developers will receive support for any outstanding questions or issues directly at support@unity3d.com.

What about Unity's TOS Clause 2.4?

Unity's updated clause states that they are prohibiting "streaming or broadcasting so that any portion of the Unity Software is primarily executed on or simulated by the cloud or a remote server and transmitted over the Internet or other networks to end user devices..." This is alarming for Unity asset and service providers and developers.

As explained by a gamedev.net user, this could mean that "any kind of processing offload for entity state occurring on a server or cloud provider (such as SpatialOS) is no longer allowed. As such, developers who planned to use Unity in any kind of distributed network capacity may find themselves in a difficult situation."

Epic Games founder Tim Sweeney has reacted harshly to this clause as well: "We specifically make the UE4 EULA apply perpetually so that when you obtain a version under a given EULA, you can stay on that version and operate under that EULA forever if you choose."
https://twitter.com/TimSweeneyEpic/status/1083407460252217346

This battle has definitely given a boost to Unreal Engine's popularity.
https://twitter.com/patrickol/status/1083476747700576256
https://twitter.com/hippowombat/status/1083581963422691329

Epic Games has also said that it has partnered with Improbable to establish a $25 million fund to "assist developers who are left in limbo by the new engine and service incompatibilities that were introduced."

Unity and Baidu collaborate for simulating the development of autonomous vehicles
Unity 2018.3 is here with improved Prefab workflows, Visual Effect graph and more
Unity ML-Agents Toolkit v0.6 gets two updates: improved usability of Brains and workflow for Imitation Learning

An AI startup now wants to monitor your kids’ activities to help them grow ‘securly’

Natasha Mathur
10 Jan 2019
3 min read
AI is everywhere, and now it is helping monitor kids' activities to maintain safety across schools and prevent school shooting incidents. An AI startup called Securly, co-founded by Vinay Mahadik and Bharath Madhusudan, focuses on student safety with features such as web filtering, cyberbullying monitoring, and self-harm alerts. Its cloud-based web filters maintain an age-appropriate internet, monitor bullying, and ensure that schools remain CIPA-compliant. Another Securly feature called 'auditor' uses Google's Gmail service to send alerts when a risk of bullying or self-harm is detected. There is also a tipline feature that lets students send anonymous tips over phone, text, or email.

The machine learning algorithms used by Securly are trained by safety specialists on safe and unsafe content. Once the algorithms flag any content as disturbing, the 24×7 student safety experts evaluate further context behind the activity and reach out to schools and authorities as needed.

Securly raised $16 million in a Series B round of funding last month, led by Defy Partners, bringing its total funding to $24 million. The company now wants to use these funds to further expand its research and development in K-12 safety. Moreover, Mahadik is also focusing on technologies that can be used across schools without hampering kids' privacy. He told Forbes, "You could say show me something that happened on the playground where a bunch of kids punched or kicked a certain kid. If you can avoid personally identifying kids and handle the data responsibly, some tech like this could be beneficial". Securly currently has over 2,000 paid school districts using its free Chromebook filtering and email auditing services.

However, public reaction to the news isn't entirely positive. Many people are criticizing the startup for shifting the focus from the real issue (i.e. providing kids with much-needed counseling and psychological help, implementing family counseling programs, etc.) and instead promoting tracking every kid's move to make sure they don't ever falter. Securly is not the only surveillance service that has received heavy criticism. Predictim, an online service that uses AI to analyze risks associated with a babysitter, also came under the spotlight over concerns about its biased algorithms and for violating babysitters' privacy.
https://twitter.com/ashedryden/status/1083084280736202752
https://twitter.com/ashedryden/status/1083087232897028096
https://twitter.com/jennifershehane/status/1083100079123124224
https://twitter.com/dmakogon/status/1083092624410660865

Babysitters now must pass Predictim's AI assessment to be "perfect" to get the job
Center for the governance of AI releases report on American attitudes and opinions on AI
Troll Patrol Report: Amnesty International and Element AI use machine learning to understand online abuse against women

Using deep learning methods to detect malware in Android Applications

Savia Lobo
10 Jan 2019
5 min read
Researchers from the North China Electric Power University have recently published a paper titled 'A Review on The Use of Deep Learning in Android Malware Detection'. The researchers highlight the fact that Android applications can be built not only by legitimate application developers but also by malware developers with criminal intent, who design and spread malicious applications that can affect the normal working of Android phones and tablets, steal personal information and credential data, or, even worse, lock the phone and ask for ransom. In this paper, they explain how deep learning methods can be used as a countermeasure in Android malware detection to fight back against malware.

Android Malware Detection Techniques

The researchers note that one critical point about mobile phones is that they are sensor-based event systems, which permits malware to respond to incoming SMS messages, position changes, and so forth, raising the level of sophistication required of automated malware-analysis techniques. Moreover, apps can use services and activities and integrate varied programming languages (e.g. Java and C++) in one application. Each application is analyzed in the following stages:

Static analysis
Static analysis screens parts of the application without actually executing them. This analysis incorporates signature-based, permission-based, and component-based analysis. The signature-based strategy extracts features and creates distinct signatures to identify specific malware; hence, it falls short in recognizing variants or unidentified malware. The permission-based strategy inspects permission requests to distinguish malware. Component-based techniques decompile the app to extract and inspect the definitions and byte code connections of significant components (i.e. activities, services, etc.) to identify exposures. The principal drawbacks of static analysis are the lack of real execution paths and suitable execution conditions.

Dynamic analysis
This technique involves executing the application on either a virtual machine or a physical device. It results in a less abstract view of the application than static analysis. The code paths executed during runtime are a subset of all available paths. The principal objective of the analysis is to achieve high code coverage, since every feasible event ought to be triggered to observe any possible malicious behavior.

Hybrid analysis
The hybrid analysis technique consolidates static features gathered from examining the application with dynamic data drawn while the application is running. Combining the two boosts detection accuracy. The principal drawback of hybrid analysis is that it consumes Android system resources and takes a long time to perform.

Use of deep learning in Android malware detection

Currently available machine learning has several weaknesses, and some open issues related to the use of DL in Android malware detection include:
Deep learning lacks transparency to provide an interpretation of the decisions made by its methods. Malware analysts need to understand how a decision was made.
There is no assurance that classification models built with deep learning will perform well under different conditions, with new data that does not match the previous training data.
Deep learning studies complex correlations between input and output features, with no innate representation of causality.
Deep learning models are not autonomous and need continual retraining and rigorous parameter adjustments.
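To make the permission-based idea above concrete, here is a minimal, hypothetical sketch of a learned Android malware detector: it encodes the permissions an app requests (a static-analysis feature) as a binary vector and fits a small neural network with scikit-learn. The permission vocabulary, apps, and labels are invented placeholders; this toy stands in for, rather than reproduces, the deep models the paper surveys.

```python
# Toy sketch only: a permission-based Android malware classifier.
# The permission list, apps, and labels below are hypothetical placeholders.
from sklearn.neural_network import MLPClassifier

PERMISSIONS = [
    "android.permission.INTERNET",
    "android.permission.READ_SMS",
    "android.permission.SEND_SMS",
    "android.permission.READ_CONTACTS",
    "android.permission.RECEIVE_BOOT_COMPLETED",
]

def permissions_to_vector(requested):
    """Encode an app's requested permissions as a binary feature vector."""
    return [1 if p in requested else 0 for p in PERMISSIONS]

# Hypothetical training data: (requested permissions, label); 1 = malware, 0 = benign.
apps = [
    ({"android.permission.INTERNET"}, 0),
    ({"android.permission.INTERNET", "android.permission.SEND_SMS",
      "android.permission.RECEIVE_BOOT_COMPLETED"}, 1),
    ({"android.permission.READ_CONTACTS", "android.permission.INTERNET"}, 0),
    ({"android.permission.READ_SMS", "android.permission.SEND_SMS"}, 1),
]

X = [permissions_to_vector(perms) for perms, _ in apps]
y = [label for _, label in apps]

# A small multi-layer perceptron standing in for the far larger deep models.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Score a new, hypothetical app by the permissions it requests.
new_app = {"android.permission.SEND_SMS", "android.permission.INTERNET"}
print(clf.predict([permissions_to_vector(new_app)]))
```

Models of this kind are exactly what the attacks described next target.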
In the training phase, DL models can be subjected to data poisoning attacks, which are implemented simply by manipulating the training data, injecting data that makes a deep learning model commit errors. In the testing phase, the models can be exposed to several attack types, including:

Adversarial attacks: the DL model's inputs are ones that an adversary has crafted deliberately to cause the model to make mistakes.
Evasion attacks: here, the intruder exploits malicious instances at test time to have them incorrectly classified as benign by a trained classifier, without having any impact on the training data. This can breach system integrity, either with a targeted or with an indiscriminate attack.
Impersonate attacks: this attack mimics data instances from targets. The attacker plans to create particular adversarial instances such that current deep learning-based models mistakenly label original instances with different tags from the imitated ones.
Inversion attacks: this attack uses the APIs exposed by machine learning systems to gather some fundamental data about the target system's models. It is divided into two types: white-box and black-box attacks. A white-box attack implies that an attacker can freely access and download learning models and other supporting data, while a black-box attack refers to a situation where the attacker only knows the APIs exposed by the learning models and some observations after providing input.

According to the researchers, hardening deep learning models against different adversarial attacks and detecting, describing and measuring concept drift are vital future work in Android malware detection. They also mention that limitations of deep learning methods, such as the lack of transparency and their non-autonomous nature, need to be addressed in order to build more efficient models. To know more about this research in detail, read the research paper.

Researchers introduce a deep learning method that converts mono audio recordings into 3D sounds using video scenes
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US

AWS introduces Amazon DocumentDB featuring compatibility with MongoDB, scalability and much more

Amrata Joshi
10 Jan 2019
4 min read
Today, Amazon Web Services (AWS) introduced Amazon DocumentDB, a MongoDB-compatible document database service designed to provide the performance, scalability, and availability needed to operate mission-critical MongoDB workloads. Customers use MongoDB as a document database to retrieve, store and manage semi-structured data. But it is difficult to build performant, highly available applications that can quickly scale to multiple terabytes and thousands of reads and writes per second, because of the complexity that comes with setting up MongoDB clusters at scale.
https://twitter.com/nathankpeck/status/1083144657591255043

Amazon DocumentDB uses a fault-tolerant, distributed and self-healing storage system that auto-scales up to 64 TB per database cluster. With the AWS Database Migration Service (DMS), users can migrate their MongoDB databases, whether on-premise or on Amazon EC2, to Amazon DocumentDB for free (for six months) with no downtime.

Features of Amazon DocumentDB

Compatibility
Amazon DocumentDB is compatible with version 3.6 of MongoDB and implements the Apache 2.0 open source MongoDB 3.6 API. It does this by emulating the responses that a MongoDB client expects from a MongoDB server, which allows users to keep using existing MongoDB drivers and tools with Amazon DocumentDB, as the sketch below illustrates.
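As a quick illustration of what that driver compatibility means in practice, the sketch below points pymongo (a standard MongoDB driver) at a DocumentDB cluster. It is not taken from the AWS announcement: the endpoint, credentials, and CA bundle path are hypothetical placeholders, and pymongo 3.x option names are assumed.

```python
# Hypothetical sketch: using an existing MongoDB driver (pymongo 3.x) against
# Amazon DocumentDB. Endpoint, credentials, and certificate path are placeholders.
from pymongo import MongoClient

client = MongoClient(
    "sample-cluster.node.us-east-1.docdb.amazonaws.com",  # placeholder cluster endpoint
    27017,
    username="exampleuser",
    password="examplepassword",
    ssl=True,                                   # DocumentDB clusters require TLS by default
    ssl_ca_certs="rds-combined-ca-bundle.pem",  # CA bundle downloaded from AWS (placeholder path)
    replicaSet="rs0",
)

db = client["inventory"]
db.products.insert_one({"sku": "abc-123", "qty": 42})  # same API calls as MongoDB 3.6
print(db.products.find_one({"sku": "abc-123"}))
```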
Scalability
Storage in Amazon DocumentDB can be scaled from 10 GB to 64 TB in increments of 10 GB. With this document database service, users don't have to preallocate storage or monitor free space. Users can choose between six instance sizes (15.25 GiB (gibibytes) to 488 GiB of memory) and create up to 15 read replicas. Storage and compute are decoupled, and each can easily be scaled independently as needed.

Performance
Amazon DocumentDB stores database changes as a log stream, which allows users to process millions of reads per second with millisecond latency. This storage model increases performance without compromising data durability and further enhances overall scalability.

Reliability
Amazon DocumentDB's 6-way storage replication provides high availability. It can fail over from the primary to a replica within 30 seconds, and it supports MongoDB replica set emulation so that applications can quickly handle system failure.

Fully managed
Amazon DocumentDB is fully managed, with fault detection, built-in monitoring, and failover. Users can set up daily snapshot backups, take manual snapshots, or use either one to create a fresh cluster if necessary. It integrates with Amazon CloudWatch, so users can monitor over 20 key operational metrics for their database instances via the AWS Management Console.

Secure
Users can encrypt their active data, snapshots, and replicas with a KMS (Key Management Service) key when creating Amazon DocumentDB clusters. In this document database service, authentication is enabled by default. For database security, it also uses network isolation with the help of Amazon VPC.

According to Infoworld, this news has given rise to a few speculations, as AWS isn't promising that its managed service will work with all applications that use MongoDB. But this move by Amazon has led to a new rivalry. MongoDB CEO and president Dev Ittycheria told TechCrunch, "Imitation is the sincerest form of flattery, so it's not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB's document model. However, developers are technically savvy enough to distinguish between the real thing and a poor imitation. MongoDB will continue to outperform any impersonations in the market."

As reported by GeekWire and TechCrunch, Amazon DocumentDB's compatibility with MongoDB is unlikely to require commercial licensing from MongoDB.
https://twitter.com/tomkrazit/status/1083165858891915264

To know more, check out the Amazon DocumentDB page.

US government privately advised by top Amazon executive on web portal worth billions to Amazon; The Guardian reports
Amazon Rekognition faces more scrutiny from Democrats and German antitrust probe
Amazon re:Invent Day 3: Lambda Layers, Lambda Runtime API and other exciting announcements!

Center for the governance of AI releases report on American attitudes and opinions on AI

Natasha Mathur
10 Jan 2019
7 min read
The Center for the Governance of AI, housed within the University of Oxford, released a report yesterday titled "Artificial Intelligence: American Attitudes and Trends", based on the findings of a nationally representative survey conducted using the survey firm YouGov. The report discusses the survey results, which provide insight into the American public's attitudes and opinions toward AI as well as AI governance. Let's have a look at some of the major highlights from the report.

Key highlights

More Americans support than oppose AI development
The report states that Americans express mixed support for AI development, with more of them supporting development than opposing it. The survey results showed that a substantial 41% of the American respondents somewhat or strongly support the development of AI, while a smaller minority (22%) somewhat or strongly oppose it. About 28% of the respondents expressed a neutral attitude toward AI development, and 10% stated that they do not know.
Artificial Intelligence: American Attitudes and Trends

Support for AI development varies with gender, race, experience, and education
The report states that support for AI development among the American public is greater among those who are wealthy, educated, male, or experienced with technology. The Center for the Governance of AI performed a multiple linear regression to predict this support. As per the survey results, a majority of the respondents in the following four subgroups expressed support for AI development:
Those with four-year college degrees (57%)
Those with an annual household income above $100,000 (59%)
Those who have graduated with a computer science or engineering degree (56%)
Those with experience in computer science or programming (58%)
On the other hand, women (35%), those with a high school degree or less (29%), and those with an annual household income below $30,000 (33%) showed less enthusiasm toward AI development.

A large majority of Americans want more careful management of AI and robots
The report states that a large majority of the American public (more than eight in ten) want AI and robots to be carefully managed, while only 6% disagree. The Center for the Governance of AI replicated a question from the 2017 Special Eurobarometer to compare Americans' attitudes with those of EU residents. They found that 82% of those in the U.S. want more careful management of robots and AI, which is not far from the EU average, where 88% of the public supports the same notion. Similarly, 6% of Americans don't support the notion, which is quite close to the EU average of 7%. The report states that a large percentage of respondents in the survey selected the "don't know" option.

Americans consider many AI governance challenges to be important
The report states that Americans consider AI governance challenges such as prioritizing data privacy and preventing AI-enhanced digital manipulation to be of high importance. Respondents of the survey were each asked to consider five AI governance challenges randomly chosen from the given 13. As per the survey results, the AI governance challenges that Americans think are most impactful and most important for tech companies to tackle include data privacy, AI-enhanced cyber attacks, and surveillance.
Artificial Intelligence: American Attitudes and Trends
On the other hand, the challenges that Americans consider, on average, 7% less likely to be impactful include autonomous vehicles, value alignment, bias in using AI for hiring, the U.S.-China arms race, disease diagnosis, and technological unemployment. Finally, the challenges perceived as even less likely to be impactful include criminal justice bias and critical AI systems failures.
Artificial Intelligence: American Attitudes and Trends

Americans see the potential for U.S.-China cooperation on certain AI governance challenges
As part of the survey, American respondents were each assigned three out of five AI governance challenges and asked how likely they see U.S.-China cooperation on each. The five challenges were:
AI cyber attacks against governments, individuals, and organizations.
AI-assisted surveillance that violates privacy and civil liberties.
AI systems that are safe, trustworthy, and aligned with human values.
Banning lethal autonomous weapons.
Guaranteeing a good standard of living for people who are at risk of losing their jobs to automation.
The survey results showed that U.S.-China cooperation on value alignment is perceived to be the most likely (48% mean likelihood) and cooperation to prevent AI-assisted surveillance the least likely (40% mean likelihood). "In the future, we plan to survey Chinese respondents to understand how they view U.S.-China cooperation on AI and what governance issues they think the two countries could collaborate on", states the report.

Americans don't think labor market disruptions will increase with time
As part of the survey, respondents were randomly assigned one of four conditions concerning the likelihood of AI and automation creating more jobs than they eliminate over the future time frames of 10 years, 20 years, and 50 years. The survey results showed that, on average, the American public disagrees with the statement "automation and AI will create more jobs than they will eliminate" more than they agree with it. Also, about a quarter of respondents gave "don't know" responses. However, respondents' agreement with the statement increased slightly with the future time frame.
Artificial Intelligence: American Attitudes and Trends

Americans trust the U.S. military, universities, tech firms, and non-governmental organizations the most to build AI
The report states that Americans put more trust in tech companies and non-governmental organizations than in governments for the development and use of AI. As part of the survey, respondents were randomly assigned five actors out of 15, including some that are not well known to the public, such as NATO, CERN, and OpenAI, and were asked how much confidence they have in each of them to build AI; they were then randomly assigned another five out of 15 actors. As per the survey results, Americans consider university researchers and the U.S. military the most trusted groups to develop AI: half of Americans expressed a "great deal" or "fair amount" of confidence in these groups. Americans expressed slightly less confidence in tech companies, non-profit organizations, and American intelligence organizations. In general, the American public places more confidence in non-governmental organizations than in governmental ones.
Artificial Intelligence: American Attitudes and Trends
41% of the American population expressed a "great deal" or a "fair amount" of confidence in tech companies, compared to the 26% who feel that way about the U.S. federal government. The American public has more trust in intergovernmental research organizations (e.g., CERN), the Partnership on AI, and non-governmental scientific organizations (e.g., AAAI). Moreover, about one in five respondents selected a "don't know" response.

The surveys were conducted between June 6 and 14, 2018, and a total of 2,000 American adults (18+) completed them. The analysis of the survey was pre-registered on the Open Science Framework. "Supported by a grant from the Ethics and Governance of AI Fund, we intend to conduct more extensive and intensive surveys in the coming years, including of residents in Europe, China, and other countries", states the report.

AI Now Institute releases Current State of AI 2018 Report
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
Troll Patrol Report: Amnesty International and Element AI use machine learning to understand online abuse against women

The Angular 7.2.1 CLI release fixes a webpack-dev-server vulnerability, supports TypeScript 3.2 and Angular 7.2.0-rc.0

Bhagyashree R
10 Jan 2019
2 min read
Today, Minko Gechev, an engineer on the Angular team at Google, announced the release of Angular CLI 7.2.1. This release fixes a webpack-dev-server vulnerability and also adds support for a multiselect list prompt, TypeScript 3.2, and Angular 7.2.0-rc.0.
https://twitter.com/mgechev/status/1083133079579897856

Understanding the webpack-dev-server vulnerability
The npm install command was reporting a Missing Origin Validation vulnerability because webpack-dev-server versions before 3.1.10 are missing origin validation on the websocket server. A remote attacker can take advantage of this vulnerability to steal a developer's code, because the origin of requests to the websocket server, which is used for Hot Module Replacement (HMR), is not validated.

Other updates in Angular CLI 7.2.1
Several updates and bug fixes are listed in the release notes in the Angular CLI GitHub repository. Some of them are:
Support is added for a multiselect list prompt
Support is added for TypeScript 3.2 and Angular 7.2.0-rc.0
Optimization options are updated
Warnings are added for overriding flags in arguments
lintFix is added to several other schematics
`resourcesOutputPath` is added to the schema to define where style resources will be placed, relative to outputPath
The architect command's project parsing is improved
Prompt support is added using Inquirer
A Jobs API is added
Directly loading component templates is supported

Angular 7 is now stable
Unit testing Angular components and classes [Tutorial]
Setting up Jasmine for Unit Testing in Angular [Tutorial]

Black Hat hackers used IPMI cards to launch JungleSec ransomware, mostly affecting Linux servers

Savia Lobo
10 Jan 2019
3 min read
Unsecured IPMI (Intelligent Platform Management Interface) cards have provided a gateway for the JungleSec ransomware, which has affected multiple Linux servers. The ransomware attack was originally reported in early November 2018. Victims were seen using Windows, Linux, and Mac; however, there were no traces of how they were being infected. Black Hat hackers have been using IPMI cards to breach access and install the JungleSec ransomware, which encrypts data and demands a 0.3 bitcoin payment (about $1,100) for the unlock key.

IPMI, a management interface, is built into server motherboards or installed as an add-on card. It enables administrators to remotely manage the computer: power the computer on and off, get system information, and access a KVM for remote console access. IPMI is also useful for managing servers, especially when renting servers from another company at a remote colocation center. However, if the IPMI interface is not properly configured, it can allow attackers to remotely connect to and take control of servers using default credentials.

Bleeping Computer said they have "spoken to multiple victims whose Linux servers were infected with the JungleSec Ransomware and they all stated the same thing; they were infected through unsecured IPMI devices". Bleeping Computer first reported this story on Dec 26, indicating that the hack only affected Linux servers. The attackers installed the JungleSec ransomware through the server's IPMI interface. In the conversations Bleeping Computer had with two of the victims, one victim said "that the IPMI interface was using the default manufacturer passwords." The other victim stated that "the Admin user was disabled, but the attacker was still able to gain access through possible vulnerabilities." Once the attackers gained access to a server, they would reboot the computer into single-user mode in order to gain root access. Once in single-user mode, they downloaded and compiled the 'ccrypt' encryption program.

To secure the IPMI interface, the first step is to change the default password, as most of these cards ship with the default credentials Admin/Admin. "Administrators should also configure ACLs that allow only certain IP addresses to access the IPMI interface. In addition, IPMI interfaces should be configured to only listen on an internal IP address so that it is only accessible by local admins or through a VPN connection", Bleeping Computer reports. The report also includes a tip from Negulescu, not specific to IPMI interfaces, which suggests adding a password to the GRUB bootloader. Doing so makes it more difficult, if not impossible, to reboot into single-user mode from the IPMI remote console.

To know more about this news in detail, head over to Bleeping Computer's complete coverage.

Go Phish! What do thieves get from stealing our data?
Hackers are our society's immune system – Keren Elazari on the future of Cybersecurity
Sennheiser opens up about its major blunder that let hackers easily carry out man-in-the-middle attacks

TLS comes to Google public DNS with support for DNS-over-TLS connections

Prasad Ramesh
10 Jan 2019
2 min read
In a blog post yesterday, Google announced that its public DNS now supports Transport Layer Security (TLS).

Google DNS
Google's public Domain Name Service (DNS) is the world's largest address resolver. The service allows anyone using it to convert a human-readable domain name into the addresses used by browsers. Like search results, the domains visited via DNS can also expose sensitive information. With DNS-over-TLS, users can add security to the queries between their devices and Google Public DNS.

Google DNS-over-TLS
The need for security from forged websites and surveillance has grown over the years. The DNS-over-TLS protocol specifies a standard way to secure and maintain the privacy of DNS traffic between users and resolvers. Users can secure connections to Google Public DNS with TLS, the same technology that makes HTTPS connections secure. The DNS-over-TLS specification is implemented according to the RFC 7766 recommendations, which minimizes the overhead of using TLS; the implementation also supports TLS 1.3, TCP fast open, and pipelining multiple queries over a single connection. It is deployed on Google's own infrastructure, which the company claims provides reliable and scalable management of the DNS-over-TLS connections.

Enabling DNS-over-TLS connections
DNS-over-TLS can be used by Android 9 Pie users. Linux users can use the stubby resolver to communicate with the DNS-over-TLS service. You can create an issue if you are facing one.

A comment from Hacker News says: "This is a DNS provided by Google, a company that earns money by analysing user data. If you want privacy, run your own DNS." But Google has stated in its guides that it does not store any personally identifiable information long term.

Cloudflare's 1.1.1.1 DNS service is now available as a mobile app for iOS and Android
Root Zone KSK (Key Signing Key) rollover to resolve DNS queries was successfully completed
Mozilla's new Firefox DNS security updates spark privacy hue and cry
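For a sense of what a DNS-over-TLS query looks like from code, here is a small sketch using the dnspython library. It is not part of Google's announcement and assumes dnspython 2.x, which provides a dns.query.tls helper; it queries Google Public DNS on the standard DNS-over-TLS port 853.

```python
# Sketch: resolving a name over DNS-over-TLS against Google Public DNS.
# Assumes dnspython 2.x (pip install dnspython).
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")
# 8.8.8.8 is Google Public DNS; port 853 is the standard DNS-over-TLS port,
# and dns.google is the hostname presented in Google's TLS certificate.
response = dns.query.tls(query, "8.8.8.8", port=853, server_hostname="dns.google")

for rrset in response.answer:
    print(rrset)
```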

Facebook app is undeletable on Samsung phones and can possibly track your movements, reports Bloomberg

Sugandha Lahoti
10 Jan 2019
2 min read
2018 was a really bad year for Facebook in terms of privacy lawsuits and data-stealing allegations surrounding the company. As #DeleteFacebook did the rounds on Twitter in the late months of last year, there is more news to add fuel to the fire. According to a report by Bloomberg, Samsung phone users are unable to delete the Facebook app from their smartphones.

Apparently, Nick Winke, a photographer in the Pacific Northwest, tried to delete the Facebook app from his Samsung Galaxy S8. He soon found out it was undeletable; he found only an option to "disable" it, and he wasn't sure what that meant. This is alarming, because if an application is a permanent feature of a user's device, can it track the user's digital actions? It has also raised concerns about whether Samsung is monetizing hardware outside of margins through data exploitation by partnering with Facebook.

After the news broke, a lot of people expressed their concerns on social media platforms.
https://twitter.com/riptari/status/1082926077348069377
https://twitter.com/TomResau/status/1083067919746117638
https://twitter.com/PressXtoJason_/status/1082981989966401544

A Twitter user also expressed concerns about buying a Samsung smartphone.
https://twitter.com/APirateMonk/status/1083016272680386560

François Chollet, the author of Keras, has termed Facebook "Phillip Morris combined with Lockheed Martin, but bigger."
https://twitter.com/fchollet/status/1083034900020658176

A Facebook spokesperson told Bloomberg that the disabled app doesn't collect data or send information back to Facebook. They specified that whether an app is deletable or not depends on various pre-install deals Facebook has made with phone manufacturers, operating systems and mobile operators. However, they declined to specify exactly how many such pre-install deals Facebook has globally. Samsung also told Bloomberg that it has pre-installed the Facebook app on "selected models" with an option to disable it, specifying that a disabled app is no longer running.

ProPublica shares learnings of its Facebook Political Ad Collector project
NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
British parliament publishes confidential Facebook documents that underscore the growth at any cost culture at Facebook

Homebrew 1.9.0 released with periodic brew cleanup, beta support for Linux, Windows and much more!

Melisha Dsouza
10 Jan 2019
2 min read
Yesterday, Mike McQuaid, Homebrew's lead maintainer, announced the release of Homebrew 1.9.0. The release has major updates like Linux support, (optional) automatic brew cleanup, providing bottles (binary packages) to more Homebrew users, and much more. Homebrew is an open-source software package management system that simplifies the installation of software on Apple's macOS operating system. Homebrew automatically handles all dependencies and installs requested software into one common location, providing users with easy access and quick updates.

Features of Homebrew 1.9.0
Beta support for Linux and for Windows 10 with the Windows Subsystem for Linux. Linuxbrew (Homebrew on Linux) does not require root access.
If the HOMEBREW_INSTALL_CLEANUP environment variable is set, brew cleanup runs periodically on the system; it also triggers individual formula cleanup on reinstall, install or upgrade.
brew prune has been replaced by brew cleanup, and its functionality now runs as part of brew cleanup.
Homebrew 1.9.0 will not run on 32-bit Intel CPUs.
Incomplete downloads can now be resumed when the server rejects HEAD requests. This is particularly useful since some HTTP servers apparently don't support HEAD.
brew bottle will allow relocation of more bottles, by ignoring source code and skipping matches to build dependencies.
macOS Mojave bottles are optimized for the newer CPUs required by Mojave.
...and much more!

What to expect in Homebrew 2.0.0?
Official support for Linux and for Windows 10 with the Windows Subsystem for Linux.
Homebrew 2.0.0 will stop running on macOS versions 10.8 and below.
Homebrew 2.0.0 will stop the migration of old installations from the legacy Homebrew/homebrew repository.

While most users are excited about the news, some of them are not satisfied with Homebrew's documentation.
Source: Hacker News

You can head over to Homebrew's official blog to know more about the additional features introduced in Homebrew 1.9.0.

Homebrew's GitHub repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
An update on Bcachefs, the "next generation Linux filesystem"
The Linux and RISC-V foundations team up to drive open source development and adoption of RISC-V instruction set architecture (ISA)

GitHub introduces a social network style status feature to let developers inform collaborators of their availability

Bhagyashree R
10 Jan 2019
2 min read
Yesterday, Nat Friedman, GitHub's CEO, announced a new feature that allows developers to share their status and let coworkers know about their availability. This feature gives open source maintainers and other developers an option to inform their community that they are away and won't respond to issues as quickly.
https://twitter.com/natfriedman/status/1083044980929716224

GitHub started this year with a bang by announcing updates like unlimited free private repositories (GitHub Free) and a unified enterprise offering (GitHub Enterprise). Previously, it also added a Star button with which you can indicate the topics you are interested in, so GitHub can surface code and developers that share your interests.

You can find this feature on your GitHub profile and choose from options like On vacation, Working from home, Out sick, and Focusing. You can also set it to Busy, and GitHub will inform your coworkers if they assign you any task.
Source: GitHub

Many users reacted positively to this feature update and also suggested further improvements. These are some of the suggestions:
https://twitter.com/BillHiggins/status/1083158138654736384
https://twitter.com/eamodio/status/1083056145768632325
https://twitter.com/jlnostr/status/1083045252892635136

Though many developers were excited about this feature, some of them also mentioned their concerns:
https://twitter.com/timoinwien/status/1083048055849582592
https://twitter.com/simran5590/status/1083045368147767296

GitHub now provides unlimited free private repos and a new GitHub Enterprise
GitHub plans to deprecate GitHub Services and move to Webhooks in 2019
GitHub was down first working day of 2019, hacker claims DDoS

TriggerMesh announces open source ‘Knative Lambda Runtime’; AWS Lambda functions can now be deployed on Knative!

Melisha Dsouza
10 Jan 2019
2 min read
"We believe that the key to enabling cloud native applications, is to provide true portability and communication across disparate cloud infrastructure." Mark Hinkle, co-founder of TriggerMesh Yesterday, TriggerMesh- the open source multi-cloud service management platform- announced their open source project ‘Knative Lambda Runtime’ (TriggerMesh KLR). KLR will bring AWS Lambda serverless computing to Kubernetes which will enable users to run Lambda functions on Knative-enabled clusters and serverless clouds. Amazon Web Services' (AWS) Lambda for serverless computing can only be used on AWS and not on another cloud platform. TriggerMesh KLR changes the game completely as now, users can avail complete portability of Amazon Lambda functions to Knative native enabled clusters, and Knative enabled serverless cloud infrastructure “without the need to rewrite these serverless functions”. [box type="shadow" align="" class="" width=""]Fun fact: KLR is pronounced as ‘clear’[/box] Features of TriggerMesh Knative Lambda Runtime Knative is a  Google Cloud-led Kubernetes-based platform which can be used to build, deploy, and manage modern serverless workloads. KLR are Knative build templates that can be used to runan AWS Lambda function in a Kubernetes cluster as is in a Knative powered Kubernetes cluster (installed with Knative). KLR enables serverless users to move functions back and forth between their Knative and AWS Lambda. AWS  Lambda Custom Runtime API in combination with the Knative Build system makes deploying KLR possible. Serverless users have shown a positive response to this announcement, with most of them excited for this news. Kelsey Hightower, developer advocate, Google Cloud Platform, calls this news ‘dope’ and we can understand why! His talk at KubeCon+CloudNativeCon 2018 had focussed on serveless and its security aspects. Now that AWS Lambda functions can be run on Google’s Knative, this marks a new milestone for TriggerMesh. https://twitter.com/kelseyhightower/status/1083079344937824256 https://twitter.com/sebgoa/status/1083014086609301504 It would be interesting to see how this moulds the path to a Kubernetes hybrid-cloud model. Head over to TriggerMesh’s official blog for more insights to this news. Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes  

Uber AI Labs introduce POET(Paired Open-Ended Trailblazer) to generate complex and diverse learning environments and their solutions

Savia Lobo
09 Jan 2019
3 min read
Yesterday, researchers at Uber AI Labs released the Paired Open-Ended Trailblazer (POET) algorithm, which pairs the generation of environmental challenges with the optimization of agents to solve those challenges. The POET algorithm explores many different paths through the space of possible problems and solutions and, critically, allows these stepping-stone solutions to transfer between problems. The algorithm aims at generating new tasks, optimizing solutions for them, and transferring agents between tasks to enable otherwise unobtainable advances.

The researchers have applied POET to create and solve bipedal walking environments. These environments were adapted from the BipedalWalker environments in OpenAI Gym, popularized in a series of blog posts and papers by David Ha. Each environment Ei is paired with a neural-network-controlled agent Ai that tries to learn to navigate through that environment. Here's an image that depicts an example environment and agent.
Source: Uber Engineering

In this experiment, the POET algorithm aims to achieve two goals:
(1) evolve the population of environments toward diversity and complexity
(2) optimize agents to solve their paired environments
During a single such run, POET generates a diverse range of complex and challenging environments, as well as their solutions. POET also periodically performs transfer experiments to explore whether an agent optimized in one environment might serve as a stepping stone to better performance in a different environment. There are two types of transfer attempts:
Direct transfer: the agents from the originating environment are directly evaluated in the target environment.
Proposal transfer: the agents take one ES optimization step in the target environment.
Source: Uber Engineering

By testing transfers to other active environments, POET harnesses the diversity of its multiple agent-environment pairs to its full potential, i.e., without missing any opportunities to gain an advantage from existing stepping stones. The researchers mention that POET could invent radical new courses and solutions to them at the same time. It could similarly produce fascinating new kinds of soft robots for unique challenges it invents that only soft robots can solve. POET could also generate simulated test courses for autonomous driving that both expose unique edge cases and demonstrate solutions to them.

In their blog, the researchers said they will release the source code soon, and that "more exotic applications are conceivable, like inventing new proteins or chemical processes that perform novel functions that solve problems in a variety of application areas. Given any problem space with the potential for diverse variations, POET can blaze a trail through it".

Read more about the Paired Open-Ended Trailblazer (POET) in detail in its research paper. Here's a video that demonstrates the working of the POET algorithm:
https://youtu.be/D1WWhQY9N4g

Canadian court rules out Uber's arbitration process; calls it "unconscionable" and "invalid"
Uber to restart its autonomous vehicle testing, nine months after the fatal Arizona accident
Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon
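To give a rough feel for the loop described above, here is a heavily simplified, hypothetical sketch of POET's outer loop. The real POET evolves BipedalWalker terrains and optimizes neural-network controllers with evolution strategies; here, environments are reduced to difficulty scalars and agents to skill scalars purely for illustration, so this is not Uber's implementation.

```python
# Heavily simplified, hypothetical sketch of the POET outer loop.
# Environments are difficulty scalars and agents are skill scalars, standing in
# for terrains and neural-network controllers in the real algorithm.
import random

random.seed(0)

def score(agent, env):
    """Toy score for how well an agent copes with an environment."""
    return agent - env

def optimize(agent, env, steps=10, lr=0.1):
    """Stand-in for the ES optimization of an agent in its paired environment."""
    for _ in range(steps):
        if score(agent, env) < 1.0:
            agent += lr
    return agent

pairs = [(0.0, 0.0)]  # start with a trivial environment and an untrained agent

for iteration in range(20):
    # 1. Periodically generate new, harder environments from existing ones.
    if iteration % 5 == 0:
        env, agent = random.choice(pairs)
        pairs.append((env + random.uniform(0.2, 0.6), agent))

    # 2. Optimize every agent within its own paired environment.
    pairs = [(env, optimize(agent, env)) for env, agent in pairs]

    # 3. Transfer attempt: adopt an agent from another pair if it scores better
    #    in this environment than the incumbent (akin to direct transfer).
    best_agent = max(agent for _, agent in pairs)
    pairs = [
        (env, best_agent if score(best_agent, env) > score(agent, env) else agent)
        for env, agent in pairs
    ]

print(pairs)
```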

ProtonMail shares guidelines to help organizations achieve EU GDPR compliance

Natasha Mathur
09 Jan 2019
5 min read
ProtonMail launched an online resource site yesterday, called "GDPR.eu", that offers a complete compliance guide to the General Data Protection Regulation (GDPR), the EU data protection law. GDPR is considered the toughest privacy and security law in the world. The law imposes obligations on organizations that collect users' personal data across Europe, and it includes fines of tens of millions of euros against organizations that violate its rules on privacy and security.

The GDPR compliance guide offers detailed information about the GDPR and answers questions such as "how to write a GDPR-compliant privacy notice", "how does GDPR affect email", and "what is a GDPR data protection officer (DPO)". Let's have a look at some of the key topics covered in the GDPR compliance guide.

GDPR-compliant privacy notice
A GDPR privacy notice is a public document from an organization that details how it processes users' personal data and how it applies GDPR's data protection principles. The information that needs to be mentioned in the privacy notice varies depending on two factors: a) whether the organization has collected the data directly from an individual, or b) whether it received the data via a third party.

As per the GDPR, organizations need to provide their users with a privacy notice that is:
concise, transparent, intelligible, and presented in an easily accessible form;
written in clear and plain language, especially for information that is addressed specifically to a child;
delivered properly and in a timely manner;
provided free of charge.

The guide also mentions some best practices that should be followed when writing a privacy notice. It notes that phrases such as "we may use your personal data to develop services" or "we may use your personal data for research purposes" should not be used in a privacy notice, as they don't give a clear picture of how an organization intends to use the data. Instead, phrases such as "we will retain your shopping history and use details of the products that you have previously purchased to make better suggestions to you for other products" are much better and more informative.

GDPR email compliance
The GDPR compliance guide provides information on how GDPR affects email. It states that GDPR does not put a ban on email marketing by any means; instead, it encourages organizations to practice effective email marketing. "A good marketing email should ideally provide value to the recipient and be something they want to receive anyway. What the GDPR does is clarify the terms of consent, requiring organizations to ask for an affirmative opt-in to be able to send communications. And you must also make it easy for people to change their mind and opt-out", states the guide.

The guide covers another aspect of email as well: email security. As per Article 5(f) of the GDPR, it is the responsibility of an organization to protect users' personal data against accidental loss, destruction or damage by implementing the appropriate technical or organizational measures. Moreover, the guide states that in order to avoid liability, it's important for organizations to educate their teams about email safety. For instance, implementing basic steps such as two-factor authentication is a good initiative toward protecting user data and complying with the GDPR.

GDPR Data Protection Officer (DPO)
The GDPR, under certain conditions, requires organizations to appoint a Data Protection Officer who can oversee the organization's GDPR compliance. The Data Protection Officer (DPO) should possess expert knowledge of data protection law and practices. Article 38 of the GDPR states that no other employees within an organization can issue instructions to the DPO regarding the performance of their tasks. DPOs have wide-ranging responsibilities, and the position is protected from potential interference from other employees within the organization. Also, the DPO reports only to the highest level of management at the organization.

The GDPR does not list specific qualifications for a DPO. However, it does mention that the level of knowledge and experience required when appointing an organization's DPO should be determined based on the complexity of its data processing operations. The GDPR compliance guide mentions three criteria, any of which requires an organization to appoint a DPO:
Public authority: the processing of personal data is handled by a public body or public authority.
Large-scale, regular monitoring: the processing of personal user data is the main activity of an organization that regularly and systematically observes user data on a large scale.
Large-scale special data categories: the processing of specific "special" categories of data is carried out on a large scale.

Apart from these major guidelines, the GDPR compliance guide also offers an overview of the GDPR, a GDPR compliance checklist, and GDPR forms and templates, along with the latest news and updates regarding the GDPR. Check out the complete GDPR compliance guide here.

EU to sponsor bug bounty programs for 14 open source projects from January 2019
Twitter on the GDPR radar for refusing to provide a user his data due to 'disproportionate effort' involved
Tim Cook talks about privacy, supports GDPR for USA at ICDPPC, ex-FB security chief calls him out

PostgreSQL wins ‘DBMS of the year’ 2018 beating MongoDB and Redis in DB-Engines Ranking

Amrata Joshi
09 Jan 2019
4 min read
Last week, DB-Engines announced PostgreSQL as the Database Management System (DBMS) of the year 2018, as it gained more popularity in the DB-Engines Ranking last year than any of the other 343 monitored systems.

Jonathan S. Katz, PostgreSQL contributor, said, "The PostgreSQL community cannot succeed without the support of our users and our contributors who work tirelessly to build a better database system. We're thrilled by the recognition and will continue to build a database that is both a pleasure to work with and remains free and open source."

PostgreSQL, which will turn 30 this year, has won the DBMS title for the second time in a row. It has established itself as the preferred data store amongst developers and has been appreciated for its stability and feature set. In the DBMS market, various systems use PostgreSQL as their base technology; this itself shows how well established PostgreSQL is.

Simon Riggs, major PostgreSQL contributor, said, "For the second year in a row, the PostgreSQL team thanks our users for making PostgreSQL the DBMS of the Year, as identified by DB-Engines. PostgreSQL's advanced features cater to a broad range of use cases all within the same DBMS. Rather than going for edge case solutions, developers are increasingly realizing the true potential of PostgreSQL and are relying on the absolute reliability of our hyperconverged database to simplify their production deployments."

How the DB-Engines Ranking scores are calculated
To determine the DBMS of the year, the team at DB-Engines subtracted the popularity scores of January 2018 from the latest scores of January 2019. The team used the difference of these numbers instead of a percentage, because a percentage would favor systems with tiny popularity at the beginning of the year.

The popularity of a system is calculated from parameters such as the number of mentions of the system on websites and the number of mentions in the results of search engine queries; the team at DB-Engines uses Google, Bing, and Yandex for this measurement. In order to count only relevant results, the team searches for <system name> together with the term database, e.g. "Oracle" and "database". The next measure, general interest in the system, uses the frequency of searches in Google Trends. The number of related questions and the number of interested users on well-known IT-related Q&A sites such as Stack Overflow and DBA Stack Exchange are also counted. For calculating the ranking, the team also uses the number of offers on the leading job search engines Indeed and Simply Hired. The number of profiles in professional networks such as LinkedIn and Upwork in which the system is mentioned is taken into consideration, as is the number of tweets mentioning the system. The calculated result is a list of DBMSs sorted by how much they managed to increase their popularity in 2018.
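As a small illustration of that calculation, the sketch below ranks systems by the difference between their January 2019 and January 2018 scores. The numbers are made-up placeholders, not real DB-Engines scores.

```python
# Illustrative sketch of the "DBMS of the year" calculation: rank by the
# difference between January 2019 and January 2018 popularity scores.
# The scores below are made-up placeholders, not real DB-Engines numbers.
scores_jan_2018 = {"PostgreSQL": 391.0, "MongoDB": 341.0, "Redis": 130.0, "TinyDB": 0.2}
scores_jan_2019 = {"PostgreSQL": 466.0, "MongoDB": 387.0, "Redis": 149.0, "TinyDB": 0.4}

# A difference, not a percentage: a tiny system doubling from 0.2 to 0.4 would
# otherwise top the list despite a negligible absolute gain.
gains = {name: scores_jan_2019[name] - scores_jan_2018[name] for name in scores_jan_2018}

for name, gain in sorted(gains.items(), key=lambda item: item[1], reverse=True):
    print(f"{name}: +{gain:.1f}")
```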
1st runner-up: MongoDB
For 2018, MongoDB is the first runner-up; it previously won DBMS of the year in 2013 and 2014. Its growth in popularity has accelerated ever since, and it is the most popular NoSQL system. MongoDB keeps adding functionality that was previously outside the NoSQL scope. Last year, MongoDB added ACID support, which convinced a lot of developers to rely on it for critical data. With improved support for analytics workloads, MongoDB is a great choice for a larger range of applications.

2nd runner-up: Redis
Redis, the most popular key-value store, took third place for DBMS of the year 2018; it had previously been in the top three DBMSs of the year in 2014. It is best known as a high-performance and feature-rich key-value store. Redis provides a loadable modules system, which means third parties can extend its functionality. These modules offer a graph database, full-text search, time-series features, JSON data type support, and much more.

PipelineDB 1.0.0, the high performance time-series aggregation for PostgreSQL, released!
Devart releases standard edition of dbForge Studio for PostgreSQL
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code