Tech News

Baidu releases EZDL - a platform that lets you build AI and machine learning models without any coding knowledge

Melisha Dsouza
03 Sep 2018
3 min read
Chinese internet giant Baidu released ‘EZDL’ on September 1. EZDL allows businesses to create and deploy AI and machine learning models without any prior coding skills. With a simple drag-and-drop interface, it takes only four steps to train a deep learning model that’s built specifically for a business’ needs. This is particularly good news for small and medium-sized businesses for whom leveraging artificial intelligence might ordinarily prove challenging. Youping Yu, general manager of Baidu’s AI ecosystem division, claims that EZDL will allow everyone to access AI “in the most convenient and equitable way”.

How does EZDL work?

EZDL focuses on three important aspects of machine learning: image classification, sound classification, and object detection. One of the most notable features of EZDL is the small size of the training data sets required to create artificial intelligence models. For image classification and object recognition, it requires just 20 to 100 images per label. For sound classification, it needs only 50 audio files at the most. The training can be completed in just 15 minutes in some cases, or a maximum of one hour for more complex models. After a model has been trained, the algorithm can be downloaded as an SDK or uploaded into a public or private cloud platform. The algorithms created support a range of operating systems, including Android and iOS. Baidu also claims an accuracy of more than 90 percent in two-thirds of the models it creates.

How EZDL is already being used by businesses

Baidu has demonstrated many use cases for EZDL. For example:
A home decorating website called ‘Idcool’ uses EZDL to train systems that automatically identify the design and style of a room with 90 percent accuracy.
An unnamed medical institution is using EZDL to develop a detection model for blood testing.
A security monitoring firm used it to make a sound-detecting algorithm that can recognize “abnormal” audio patterns that might signal a break-in.

Baidu is clearly making its mark in the AI race. This latest release follows the launch of its Baidu Brain platform for enterprises two years ago. Baidu Brain is already used by more than 600,000 developers. Another AI service launched by the company is its conversational DuerOS digital assistant, which is installed on more than 100 million devices. As if all that weren't enough, Baidu has also been developing hardware for artificial intelligence systems in the form of its Kunlun chip, designed for edge computing and data center processing - it’s slated for launch later this year.

Baidu will demo EZDL at TechCrunch Disrupt SF, September 5th to 7th at Moscone West, 800 Howard St., San Francisco. For more on EZDL, visit Baidu's website for the project.

Read next
Baidu Apollo autonomous driving vehicles gets machine learning based auto-calibration system
Baidu announces ClariNet, a neural network for text-to-speech synthesis

Media manipulation by Deepfakes and cheap fakes requires both AI and social fixes, finds a Data & Society report

Sugandha Lahoti
19 Sep 2019
3 min read
A new report from Data & Society, published by researchers Britt Paris and Joan Donovan, argues that the violence of audiovisual (AV) manipulation - namely deepfakes and cheap fakes - cannot be addressed by artificial intelligence alone. It requires a combination of technical and social solutions.

What are deepfakes and cheap fakes?

One form of AV manipulation, executed using experimental machine learning, is the deepfake. Most recently, a terrifyingly realistic deepfake video of Bill Hader transforming into Tom Cruise went viral on YouTube. Facebook creator Mark Zuckerberg also became the target of the world’s first high-profile white-hat deepfake operation. That video was created by artists Bill Posters and Daniel Howe in partnership with advertising company Canny, and in it Zuckerberg appears to give a threatening speech about the power of Facebook.

Read Also
Now there is a Deepfake that can animate your face with just your voice and a picture.
Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts.

However, fake videos can also be rendered through Photoshop, lookalikes, re-contextualized footage, or speeding up and slowing down. This form of AV manipulation is the cheap fake. The researchers coined the term because these fakes rely on cheap, accessible software, or no software at all.

Deepfakes can’t be fixed with artificial intelligence alone

The researchers argue that deepfakes, while new, are part of a long history of media manipulation — one that requires both a social and a technical fix. They determine that responses to deepfakes need to address structural inequality; the groups most vulnerable to that violence should be able to influence public media systems. The authors say, “Those without the power to negotiate truth–including people of color, women, and the LGBTQA+ community–will be left vulnerable to increased harms.”

The researchers worry that AI-driven content filters and other technical fixes could cause real harm. “They make things better for some but could make things worse for others. Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life.” “It’s a massive project, but we need to find solutions that are social as well as political so people without power aren’t left out of the equation.” Any technical fix, the researchers say, must work alongside the legal system to prosecute bad actors and stop the spread of faked videos. “We need to talk about mitigation and limiting harm, not solving this issue. Deepfakes aren’t going to disappear.”

The report states, “There should be “social” policy solutions that penalize individuals for harmful behavior. More encompassing solutions should also be formed to enact federal measures on corporations to encourage them to more meaningfully address the fallout from their massive gains.” It concludes, “Limiting the harm of AV manipulation will require an understanding of the history of evidence, and the social processes that produce truth, in order to avoid new consolidations of power for those who can claim exclusive expertise.”

Other interesting news in tech
$100 million ‘Grant for the Web’ to promote innovation in web monetization jointly launched by Mozilla, Coil and Creative Commons
The House Judiciary Antitrust Subcommittee asks Amazon, Facebook, Alphabet, and Apple for details including private emails in the wake of antitrust investigations
UK’s NCSC report reveals significant ransomware, phishing, and supply chain threats to businesses

React Storybook UI: Logging user interactions with Actions add-on [Tutorial]

Packt Editorial Staff
17 Jul 2018
7 min read
Sometimes, you end up creating a whole new page, or a whole new app, just to see what your component can do on its own. This can be a painful process, which is why Storybook exists for React. With Storybook, you're automating a sandboxed environment to work with. It also handles all the build steps, so you can write a story for your components and see the result. In this article we are going to use Storybook add-ons, with which you can test any aspect of your component before worrying about integrating it into your application. To be specific, we are going to look at Actions, an add-on that is enabled in Storybook by default.

This React tutorial is an extract from the book React 16 Tooling written by Adam Boduch. Adam Boduch has been involved with large-scale JavaScript development for nearly 10 years. He has practical experience with real-world software systems, and the scaling challenges they pose.

Working with Actions in React Storybook

The Actions add-on is enabled in your Storybook by default. The idea with Actions is that once you select a story, you can interact with the rendered page elements in the main pane. Actions provide you with a mechanism that logs user interactions in the Storybook UI. Additionally, Actions can serve as a general-purpose tool to help you monitor data as it flows through your components. Let's start with a simple button component:

import React from 'react';

const MyButton = ({ onClick }) => (
  <button onClick={onClick}>My Button</button>
);

export default MyButton;

The MyButton component renders a button element and assigns it an onClick event handler. The handler is actually defined outside of MyButton; it's passed in as a prop. So let's create a story for this component and pass it an onClick handler function:

import React from 'react';
import { storiesOf } from '@storybook/react';
import { action } from '@storybook/addon-actions';

import MyButton from '../MyButton';

storiesOf('MyButton', module).add('clicks', () => (
  <MyButton onClick={action('my component clicked')} />
));

Do you see the action() function that's imported from @storybook/addon-actions? This is a higher-order function—a function that returns another function. When you call action('my component clicked'), you're getting a new function in return. The new function behaves kind of like console.log(), in that you can assign it a label and log arbitrary values. The difference is that the output of functions created by the Storybook action() add-on is rendered right in the actions pane of the Storybook UI.

As usual, the button element is rendered in the main pane. The content that you see in the actions pane is the result of clicking on the button three times. The output is exactly the same with every click, so the output is all grouped under the my component clicked label that you assigned to the handler function.

In the preceding example, the event handler functions that action() creates are useful as a substitute for the actual event handler functions that you would pass to your components. Other times, you actually need the event handling behavior to run. For example, you have a controlled form field that maintains its own state and you want to see what happens as the state changes. For cases like these, I find the simplest and most effective approach is to add event handler props, even if you're not using them for anything else. Let's take a look at an example of this:

import React, { Component } from 'react';

class MyRangeInput extends Component {
  static defaultProps = {
    onChange() {},
    onRender() {}
  };

  state = { value: 25 };

  onChange = ({ target: { value } }) => {
    this.setState({ value });
    this.props.onChange(value);
  };

  render() {
    const { value } = this.state;
    this.props.onRender(value);
    return (
      <input
        type="range"
        min="1"
        max="100"
        value={value}
        onChange={this.onChange}
      />
    );
  }
}

export default MyRangeInput;

Let's start by taking a look at the defaultProps of this component. By default, this component has two default handler functions for onChange and onRender—these do nothing, so that if they're not set, they can still be called and nothing will happen. As you might have guessed, we can now pass action() handlers to MyRangeInput components. Let's try this out. Here's what your stories/index.js looks like now:

import React from 'react';
import { storiesOf } from '@storybook/react';
import { action } from '@storybook/addon-actions';

import MyButton from '../MyButton';
import MyRangeInput from '../MyRangeInput';

storiesOf('MyButton', module).add('clicks', () => (
  <MyButton onClick={action('my component clicked')} />
));

storiesOf('MyRangeInput', module).add('slides', () => (
  <MyRangeInput
    onChange={action('range input changed')}
    onRender={action('range input rendered')}
  />
));

Now when you view this story in the Storybook UI, you should see lots of actions logged as you slide the range input slider. As the slider handle moves, you can see that the two event handler functions that you've passed to the component log the value at different stages of the component rendering life cycle. The most recent action is logged at the top of the pane, unlike browser dev tools, which log the most recent value at the bottom.

Let's revisit the MyRangeInput code for a moment. The first function that's called when the slider handle moves is the change handler:

onChange = ({ target: { value } }) => {
  this.setState({ value });
  this.props.onChange(value);
};

This onChange() method is internal to MyRangeInput. It's needed because the input element that it renders uses the component state as the single source of truth. These are called controlled components in React terminology. First, it sets the state of the value using the target.value property from the event argument. Then, it calls this.props.onChange(), passing it the same value. This is how you can see the event value in the Storybook UI.

Note that this isn't the right place to log the updated state of the component. When you call setState(), you have to assume that you're done dealing with state in that function, because it doesn't always update synchronously. Calling setState() only schedules the state update and the subsequent re-render of your component. Here's an example of how this can cause problems. Let's say that instead of logging the value from the event argument, you logged the value state after setting it. There's a bit of a problem now: the onChange handler logs the old state while the onRender handler logs the updated state. This sort of logging output is super confusing if you're trying to trace an event value to rendered output—things don't line up! Never log state values after calling setState().

If the idea of calling noop functions makes you feel uncomfortable, then maybe this approach to displaying actions in Storybook isn't for you. On the other hand, you might find that having a utility to log essentially anything at any point in the life cycle of your component, without the need to write a bunch of debugging code inside your component, is worth it. For such cases, Actions are the way to go.

To summarize, we learned about the Storybook Actions add-on and saw how it helps log user interactions and monitor data as it flows through your components. Grab the book React 16 Tooling today. This book covers the most important tools, utilities, and libraries that every React developer needs to know — in detail.

What is React.js and how does it work?
Is React Native really a Native framework?
React Native announces re-architecture of the framework for better performance

Dr. Brandon explains Decision Trees to Jon

Aarthi Kumaraswamy
08 Nov 2017
3 min read
[box type="shadow" align="" class="" width=""]Dr. Brandon: Hello and welcome to the third episode of 'Date with Data Science'. Today we talk about decision trees in machine learning. Jon: Decisions are hard enough to make. Now you want me to grow a decision tree. Next, you'll say there are decision jungles too! Dr. Brandon: It might come as a surprise to you, Jon, but decision trees can help you make decisions easier. Imagine you are in a restaurant and you are given a menu card. A decision tree can help you decide if you want to have a burger, pizza, fries or a pie, for instance. And yes, there are decision jungles, but they are called random forests. We will talk about them another time. Jon: You know Bran, I have never been very good at making decisions. But with food, it is easy. It's ALWAYS all you can have. Dr. Brandon: Well, my mistake. Let's take another example. You go to the doctor's after your binge eating at the restaurant with stomach complaints. A decision tree can help your doctor decide if you have a problem and then to choose a treatment option based on what your symptoms are. Jon: Really!? Tell me more. Dr. Brandon: Alright. The following excerpt introduces decision trees from the book Apache Spark 2.x Machine Learning Cookbook by Siamak Amirghodsi, Meenakshi Rajendran, Broderick Hall, and Shuen Mei. To know how to implement them in Spark read this article. [/box] Decision trees are one of the oldest and more widely used methods of machine learning in commerce. What makes them popular is not only their ability to deal with more complex partitioning and segmentation (they are more flexible than linear models) but also their ability to explain how we arrived at a solution and as to "why" the outcome is predicated or classified as a class/label. A quick way to think about the decision tree algorithm is as a smart partitioning algorithm that tries to minimize a loss function (for example, L2 or least square) as it partitions the ranges to come up with a segmented space which are best-fitted decision boundaries to the data. The algorithm gets more sophisticated through the application of sampling the data and trying a combination of features to assemble a more complex ensemble model in which each learner (partial sample or feature combination) gets to vote toward the final outcome. The following figure depicts a simplified version in which a simple binary tree (stumping) is trained to classify the data into segments belonging to two different colors (for example, healthy patient/sick patient). The figure depicts a simple algorithm that just breaks the x/y feature space to one-half every time it establishes a decision boundary (hence classifying) while minimizing the number of errors (for example, a L2 least square measure): The following figure provides a corresponding tree so we can visualize the algorithm (in this case, a simple divide and conquer) against the proposed segmentation space. What makes decision tree algorithms popular is their ability to show their classification result in a language that can easily be communicated to a business user without much math: If you liked the above excerpt, please be sure to check out Apache Spark 2.0 Machine Learning Cookbook it is originally from to learn how to implement deep learning using Spark and many more useful techniques on implementing machine learning solutions with the MLlib library in Apache Spark 2.0.

Deploying Node.js apps on Google App Engine is now easy

Kunal Chaudhari
22 Jun 2018
3 min read
Starting this month, Google App Engine allows web developers to deploy Node.js web applications to its standard environment. The App Engine standard environment consists of container instances running on Google's infrastructure. These containers previously supported runtimes for Java 7, Java 8, Python 2.7, Go and PHP. Node.js 8 is the new addition to this long list of environments.

Developers who wanted a quick, ready platform to build web applications at cloud scale with a very low cost to start, or who wanted to get rid of the burden of managing and provisioning infrastructure, have found Google App Engine to be a very good choice. It has been a developer favorite due to its zero-config deployments, zero server management, and auto-scaling capabilities. This move from Google brings numerous advantages such as fast deployments and automatic scaling, a better developer experience, and reliable security features.

Fast deployment and automatic scaling

The App Engine standard environment is known for its short deployment times. A basic Express.js application can be deployed in under a minute with the standard environment. Not only that, but App Engine allows apps to automatically scale based on the incoming traffic to the application. For example, App Engine automatically scales to zero when no requests are being made to a particular application. This allows developers to keep costs down while developing or deploying their applications.

Enhanced developer experience

Google has always been striving to provide a smoother developer experience with all its products. That's also true for this new improvement to App Engine. The new Node.js runtime comes with no language or API restrictions, which allows developers to choose the npm modules of their choice. Along with this, App Engine also provides application logs and key performance indicators in Stackdriver, which takes care of monitoring, logging, and diagnostics for applications on the Google Cloud Platform.

Reliable security

Updating the operating system or Node.js for any major or minor version is a tedious task. App Engine takes care of all this by automatically handling the updates required for your application to work smoothly with all the latest features. Not only that, but App Engine's automated one-click certificate generation allows developers to serve their application under a secure HTTPS URL with their own custom domain.

The relationship between Node.js and Google goes a long way beyond GCP, as Node.js runs on V8, Google's open source high-performance JavaScript engine. This recent collaboration between Node.js and Google also comes with better crafted Node.js libraries that allow developers to use GCP products within their Node.js applications. To try out all these new features on App Engine, you can visit the official website.

Building chat application with Kotlin using Node.js, the powerful Server-side JavaScript platform
Node 10.0.0 released, packed with exciting new features
How to deploy a Node.js application to the web using Heroku
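For context, a minimal Express.js app of the kind described above looks roughly like the sketch below. The app.yaml runtime value and the gcloud command reflect the Node.js 8 standard environment as announced, but treat the exact settings as assumptions and check Google's current documentation before deploying.

// index.js — a minimal Express.js app suitable for the App Engine standard environment.
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello from App Engine!');
});

// App Engine injects the port to listen on via the PORT environment variable.
const port = process.env.PORT || 8080;
app.listen(port, () => {
  console.log(`Listening on port ${port}`);
});

// app.yaml (a separate file) selects the Node.js 8 standard runtime:
//   runtime: nodejs8
//
// Deploy with: gcloud app deploy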

OpenCV 4.0 is on schedule for July release

Pavan Ramchandani
10 Apr 2018
3 min read
There has been some exciting news from OpenCV: OpenCV developer Vadim Pisarevsky announced the development of OpenCV 4 on the project's GitHub repository and addressed why the time is right for the release.

OpenCV 3 was released in 2015, six years after OpenCV 2, which was released in 2009. OpenCV 3 was built around the C++ 98 standard. Rewriting the library in a more recent version of C++, such as C++ 11 or later, would mean breaking "binary compatibility", which makes it important to move on from the promises OpenCV 3 made. There are two interesting concepts to know here - binary compatibility and source-level compatibility. OpenCV had promised to stay binary-compatible across versions, meaning that new OpenCV releases would stay compatible with library calls built against the previous version. Moving from the C++ 98 standard to a recent C++ standard will break this promise. However, OpenCV has looked into this and found that not much harm will be caused by the migration, hence relaxing "binary compatibility" and moving to "source compatibility" with the new release.

Apart from migrating to the latest C++ standards, the OpenCV library needs refactoring and new module additions for deep learning and neural networks, given the heavy usage of OpenCV in machine learning. OpenCV developers can expect some big revisions in functions and modules. Here is a quick summary of what you might expect in this major release of OpenCV 4.0:

Hardware-accelerated Video I/O module: This module maximizes OpenCV performance using software and hardware accelerators in the machine. Calling this module with OpenCV 4 will harness that acceleration.
HighGUI module (revised): With the enhancement of this module, you can efficiently read video from a camera or from files and also perform write operations on them. This module comes with a lot of functionality for media I/O operations.
Graph API module: This module adds support for efficiently reading and writing graphs from the image.
Point Cloud module: The point cloud module contains algorithms such as feature estimation, model fitting, and segmentation. These algorithms can be used for filtering noisy data, stitching 3D point clouds, and segmenting parts of the image, among others.
Tracking, Calibration, and Stereo modules, among other features that will benefit image processing with OpenCV.

You can find the full list of new modules that might get added in OpenCV 4 on the issues page of the OpenCV repo. The OpenCV project is relying on its huge developer community to help close the open issues within the targeted release window of July 2018. Functionality that doesn't make it into the OpenCV 4 release will be rolled into the OpenCV 4.x releases.

While you wait for OpenCV 4, enjoy these OpenCV 3 tutorials:
New functionality in OpenCV 3.0
Fingerprint detection using OpenCV 3
OpenCV Primer: What can you do with Computer Vision and how to get started?
Image filtering techniques in OpenCV
Building a classification system with logistic regression in OpenCV
Exploring Structure from Motion Using OpenCV

React Newsletter #231 from ui.dev's RSS Feed

Matthew Emerick
22 Sep 2020
2 min read
Articles

Guidelines to improve your React folder structure
In this article, Max Rosen starts off by showing you his typical folder structure, then teaches you his guiding principles for creating a folder structure that “feels right,” at least for his purposes.

How to make your react-native project work for the web
React-native-web should (in theory) allow you to build for both mobile and web platforms — all within a single codebase. In this article, Clint walks through building a very basic app and the tips and tricks he learned along the way to help it perform at a high level without being too buggy.

Why Next.js my ultimate choice over Gatsby, Gridsome, and Nuxt?
In this article, Ondrej walks through his process for evaluating each of these technologies and why he ultimately chose to go with Next.js.

Tutorials

Building complex animations with React and Framer Motion
This tutorial walks you through how to create smooth, advanced animations in React with clean and minimal declarative code.

Building React Apps With Storybook
In this in-depth tutorial, you’ll learn how to build and test React components in isolation using Storybook. You will also learn how to use the Knobs add-on to modify data directly from the Storybook explorer.

Sponsor

React developers are in demand on Vettery
Vettery is an online hiring marketplace that’s changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today. Get started today.

Projects

SurveyJS
A JavaScript survey and form library that also features a React version.

react-range
Range input with a slider. It’s accessible and lets you bring your own styles and markup.

react-xr
React components and hooks for creating VR/AR applications with react-three-fiber.

Videos

How To Build CodePen With React
In this 30-minute video, you’ll learn how to create a simple clone of the core functionality of CodePen using React.

Building a Netflix clone with React, Styled Components, and Firebase
This 10-hour video (😱) walks you through all the steps of building a Netflix clone from scratch.

All about Browser Fingerprinting, the privacy nightmare that keeps web developers awake at night

Bhagyashree R
08 May 2019
4 min read
Last week, researchers published a paper titled Browser Fingerprinting: A survey, which gives a detailed insight into what browser fingerprinting is and how it is being used in the research field and the industry. The paper further discusses the current state of browser fingerprinting and the challenges surrounding it.

What is browser fingerprinting?

Browser fingerprinting refers to the technique of collecting various device-specific information through a web browser to build a device fingerprint for better identification. The device-specific information may include details like your operating system, active plugins, timezone, language, screen resolution, and various other active settings. This information can be collected through a simple script running inside a browser. A server can also collect a wide variety of information from public interfaces and HTTP headers. This is a completely stateless technique as it does not require storing any collected information inside the browser. The paper includes a table showing an example of a browser fingerprint (source: arXiv.org).

The history of browser fingerprinting

Back in 2009, Jonathan Mayer, now an Assistant Professor in the Computer Science Department at Princeton University, investigated whether the differences in browsing environments can be exploited to deanonymize users. In his experiment, he collected the content of the navigator, screen, navigator.plugins, and navigator.mimeTypes objects of browsers. The results drawn from his experiment showed that from a total of 1328 clients, 1278 (96.23%) could be uniquely identified.

Following this experiment, in 2010, Peter Eckersley from the Electronic Frontier Foundation (EFF) performed the Panopticlick experiment, in which he investigated the real-world effectiveness of browser fingerprinting. For this experiment, he collected 470,161 fingerprints in the span of two weeks. This huge amount of data was collected from HTTP headers, JavaScript, and plugins like Flash or Java. He concluded that browser fingerprinting can be used to uniquely identify 83.6% of the device fingerprints he collected. This percentage shot up to 94.2% if users had enabled Flash or Java, as these plugins provided additional device information. This is the study that proved that individuals can really be identified through these details, and it is where the term "browser fingerprinting" was coined.

Applications of browser fingerprinting

As is the case with any technology, browser fingerprinting can be used for both negative and positive applications. By collecting browser fingerprints, one can track users without their consent or attack their device by identifying a vulnerability. Since these tracking scripts are silent and executed in the background, users will have no clue that they are being tracked. As for the positive applications: with browser fingerprinting, users can be warned beforehand if their device is out of date by recommending specific updates. The technique can be used to fight against online fraud by verifying the actual content of a fingerprint. "As there are many dependencies between collected attributes, it is possible to check if a fingerprint has been tampered with or if it matches the device it is supposedly belonging to," reads the paper. It can also be used for web authentication by verifying if the device is genuine or not.
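As an illustration of the kind of "simple script" the survey describes (a hypothetical sketch, not code from the paper), a page can read a handful of navigator and screen attributes and combine them into a crude fingerprint string:

// Collect a few of the device-specific attributes mentioned above.
// This is a toy illustration; real fingerprinting scripts gather far more
// (canvas, WebGL, fonts, audio, plus HTTP headers on the server side).
function collectFingerprint() {
  const attributes = {
    userAgent: navigator.userAgent,
    language: navigator.language,
    platform: navigator.platform,
    timezoneOffset: new Date().getTimezoneOffset(),
    screenResolution: `${screen.width}x${screen.height}`,
    colorDepth: screen.colorDepth,
    pluginCount: navigator.plugins ? navigator.plugins.length : 0
  };
  // Joining the values gives a crude identifier; real systems hash this
  // and combine it with many more attributes to improve uniqueness.
  return Object.values(attributes).join('||');
}

console.log(collectFingerprint());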
Preventing unwanted tracking by browser fingerprinting

Modifying the content of fingerprints: To prevent third parties from identifying individuals through fingerprints, we can send random or pre-defined values instead of the real ones. As third parties rely on fingerprint stability to link fingerprints to a single device, these unstable fingerprints make it difficult for them to identify devices on the web.
Switching browsers: A device fingerprint is mainly composed of browser-specific information, so users can use two different browsers, which will result in two different device fingerprints. This makes it difficult for a third party to track the browsing pattern of a user.
Presenting the same fingerprint for all users: If all the devices on the web present the same fingerprint, there is no advantage to tracking the devices. This is the approach the Tor Browser uses, known as the Tor Browser Bundle (TBB).
Reducing the surface of browser APIs: Another defense mechanism is decreasing the surface of browser APIs and reducing the quantity of information a tracking script can collect. This can be done by disabling plugins so that there are no additional fingerprinting vectors like Flash or Silverlight to leak extra device information.

Read the full paper to know more in detail.

DuckDuckGo proposes “Do-Not-Track Act of 2019” to require sites to respect DNT browser setting
Mozilla Firefox will soon support ‘letterboxing’, an anti-fingerprinting technique of the Tor Browser
Mozilla engineer shares the implications of rewriting browser internals in Rust

Epic Games grants Blender $1.2 million in cash to improve the quality of their software development projects

Vincy Davis
16 Jul 2019
4 min read
Yesterday, Epic Games announced that it is awarding the Blender Foundation $1.2 million in cash, spread over three years, to accelerate the quality of its software development projects. Blender is a free and open-source 3D creation suite which supports a full range of tools to empower artists to create 3D graphics, animation, special effects or games.

Ton Roosendaal, founder and chairman of the Blender Foundation, thanked Epic Games in a statement. He said, “Thanks to the grant we will make a significant investment in our project organization to improve on-boarding, coordination and best practices for code quality. As a result, we expect more contributors from the industry to join our projects.”

https://twitter.com/tonroosendaal/status/1150793424536313862

The $1.2 million grant from Epic is part of its $100 million MegaGrants program, which was announced this year in March. Tim Sweeney, CEO of Epic Games, had announced that Epic would offer $100 million in grants to game developers to boost the growth of the gaming industry by supporting enterprise professionals, media and entertainment creators, students, educators, and tool developers doing excellent work with Unreal Engine or enhancing open-source capabilities for the 3D graphics community. Sweeney believes that open tools, libraries, and platforms are critical to the future of the digital content ecosystem. “Blender is an enduring resource within the artistic community, and we aim to ensure its advancement to the benefit of all creators”, he adds.

This is the biggest award announced by Epic so far. Blender has no obligation to use or promote Epic Games’ storefront or engine, as this is a purely generous offer by Epic Games with “no strings attached”. In April, Magic Leap revealed that it would provide 500 Magic Leap One Creator Edition spatial computing devices for giveaway as part of the Epic MegaGrants program.

Blender users are appreciative of the support and generosity of Epic Games.

https://twitter.com/JeannotLandry/status/1150812155412963328
https://twitter.com/DomAnt2/status/1150798726379839488

A Redditor comments, “There's a reason Epic as a company has an extremely positive reputation with people in the industry. They've been doing this kind of thing for years, and a huge amount of money they're making from Fortnite is planned to be turned into grants as well. Say what you want about them, they are without question the top company in gaming when it comes to actually using their profits to immediately reinvest/donate to the gaming industry itself. It doesn't hurt that every company who works with them consistently says that they're possibly the very best company in gaming to work with.”

A comment on Hacker News read, “Epic are doing a great job improving fairness in the gaming industry, and the economic conditions for developers. I'm looking forward to their Epic Store opening up to more (high quality) Indie games.”

In 2015, Epic launched Unreal Dev Grants, offering a pool of $5 million to independent developers with interesting projects in Unreal Engine 4 to fund the development of their projects. In December 2018, Epic also launched the Epic Games store, where developers get 88% of the earned revenue.

The large donation from Epic to Blender holds more value considering the highly anticipated release of Blender 2.8 is around the corner. Though its release candidate is already out, users are quite excited for its stable release. Blender 2.8 will have new 3D viewport and UV editor tools to enhance users’ experience. With Blender aiming to increase the quality of its projects, such grants from major game publishers will only help it get bigger.

https://twitter.com/ddiakopoulos/status/1150826388229726209

A user on Hacker News comments, “Awesome. Blender is on the cusp of releasing a major UI overhaul (2.8) that will make it more accessible to newcomers (left-click is now the default!). I'm excited to see it getting some major support from the gaming industry as well as the film industry.”

What to expect in Unreal Engine 4.23?
Epic releases Unreal Engine 4.22, focuses on adding “photorealism in real-time environments”
Blender celebrates its 25th birthday!

What to expect in ASP.NET Core 3.0

Prasad Ramesh
30 Oct 2018
2 min read
ASP.NET Core 3.0 will come with some changes in the way projects work with frameworks. The .NET Core integration will be tighter, and third-party open source integration will be better supported.

Changes to shared frameworks in ASP.NET Core 3.0

In ASP.NET Core 1.0, packages were referenced as just packages. From ASP.NET Core 2.1, this was available as a .NET Core shared framework. ASP.NET Core 3.0 aims to reduce issues when working with a shared framework. This change removes some of the Json.NET (Newtonsoft.Json) and Entity Framework Core (Microsoft.EntityFrameworkCore.*) components from the ASP.NET Core 3.0 shared framework. For the areas in ASP.NET Core that depend on Json.NET, there will be packages that support the integration; the default areas will be updated to use the in-box JSON APIs. Also, Entity Framework Core will be shipped as “pure” NuGet packages.

Shift to .NET Core from .NET Framework

The .NET Framework will get fewer of the new features that come to .NET Core in future releases. This change is made so that existing applications don’t break due to changes in the framework. To leverage the features coming to .NET Core, ASP.NET Core will only run on .NET Core starting from version 3.0. Developers currently using ASP.NET Core on the .NET Framework can continue to do so until the end of the LTS support period on August 21, 2021.

Third-party components will be filtered

Third-party components will be removed, but Microsoft will support the open source community with integration APIs, contributions to existing libraries by Microsoft engineers, and project templates to ensure smooth integration of these components. Work is also being done on streamlining the experience for building HTTP APIs, and on a new API client generation system.

For more details, visit the Microsoft website.

.NET Core 3.0 and .NET Framework 4.8 more details announced
.NET Core 2.0 reaches end of life, no longer supported by Microsoft
Microsoft’s .NET Core 2.1 now powers Bing.com

Introducing remove.bg, a deep learning based tool that automatically removes the background of any person based image within 5 seconds

Amrata Joshi
18 Dec 2018
3 min read
Yesterday, Benjamin Groessing, a web consultant and developer at byteq, released remove.bg, a tool built on Python, Ruby and deep learning. The tool automatically removes the background of any image within 5 seconds. It uses various custom algorithms to process the image.

https://twitter.com/hammer_flo_/status/1074914463726350336

It is a free service, and users don’t have to manually select the background/foreground layers to separate them. One can simply select an image and instantly download the resulting image with the background removed.

Features of remove.bg

Personal and professional use: remove.bg can be used by graphic designers, photographers or selfie lovers for removing backgrounds.
Saves time and money: it saves time as it is automated, and it is free of cost.
100% automatic: apart from the image file, this release doesn’t require inputs such as selecting pixels, marking persons, etc.

How does remove.bg work?

https://twitter.com/begroe/status/1074645152487129088

Remove.bg uses AI technology to detect foreground layers and separate them from the background. It uses additional algorithms for improving fine details and preventing color contamination. The AI detects persons as foreground and everything else as background, so it only works if there is at least one person in the image. Users can upload images of any resolution, but for performance reasons the output image has been limited to 500 × 500 pixels.

Privacy in remove.bg

User images are uploaded through a secure SSL/TLS-encrypted connection. These images are processed and the result is temporarily stored until the user downloads it; approximately an hour later, the image files get deleted. The privacy message on the official website of remove.bg states, “We do not share your images or use them for any other purpose than removing the background and letting you download the result.”

What can be expected from the next release?

The next set of releases might support other kinds of images, such as product images. The team at remove.bg might also release an easy-to-use API.

Users are very excited about this release and the technology behind it. Many users are comparing it with the portrait mode on the iPhone X. Though it is not that fast, users still like it.

https://twitter.com/Baconbrix/status/1074805036264316928
https://twitter.com/hammer_flo_/status/1074914463726350336

But how strong remove.bg is with regards to privacy is a bigger question. Though the website gives a privacy note at the end, it will take more to win users’ trust. The images uploaded to remove.bg’s cloud might be at risk. How strong is the security, and what preventive measures have they taken? These are a few of the questions that might bother many. To have a look at the ongoing discussion on remove.bg, check out Benjamin Groessing’s AMA twitter thread.

Facebook open-sources PyText, a PyTorch based NLP modeling framework
Deep Learning Indaba presents the state of Natural Language Processing in 2018
NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs

macOS gets RPCS3 and Dolphin using Gfx-portability, the Vulkan portability implementation for non-Rust apps

Melisha Dsouza
05 Sep 2018
2 min read
The Vulkan Portability implementation gfx-portability allows non-Rust applications that use Vulkan to run with ease. After improving the functionality of gfx-portability’s Metal backend by benchmarking Dota 2 and verifying certain functionality through the Vulkan Conformance Test Suite (CTS), the developers are now planning to expand their testing to other projects that are open source, already use Vulkan for rendering, and currently lack strong macOS/Metal support. The projects which matched these criteria were RPCS3 and Dolphin. However, the team discovered various issues with both projects.

RPCS3 blockers

RPCS3 satisfies all the criteria mentioned above. It is an open-source Sony PlayStation 3 emulator and debugger written in C++ for Windows and Linux. RPCS3 has a Vulkan backend, and some attempts were made to support macOS previously. The gfx-rs team added surface and swapchain support to start off the macOS integration. This process identified a number of blockers in both gfx-rs and RPCS3. The RPCS3 developers and the gfx-rs team collaborated to quickly address the blockers. Once the blockers were addressed, gameplay was rendered within RPCS3.

Dolphin support for macOS

Dolphin, the emulator for two recent Nintendo video game consoles, was actively working on adding support for macOS. While testing with gfx-portability, the teams noticed some further minor bugs in gfx. The issues were addressed and the teams were able to render real gameplay.

Continuous releases for the masses

The team has already started automatically releasing gfx-portability binaries under the latest GitHub release of the portability repository. Currently the team provides macOS (Metal) and Linux (Vulkan) binaries, and will add Windows (Direct3D 12/11 and Vulkan) binaries soon. These releases ensure that users don’t have to build gfx-portability themselves in order to test it with an existing project. The binaries can be used both with the Vulkan loader on macOS and by linking them directly from an application.

The team was able to run RPCS3 and Dolphin on top of gfx-portability’s Metal backend and only had to address some minor issues in the process. Stability and performance will improve as more real-world use cases are tested. You can read more about this on gfx-rs.github.io.

OpenAI Five loses against humans in Dota 2 at The International 2018
How to use artificial intelligence to create games with rich and interactive environments [Tutorial]
Best game engines for AI game development

Researchers find a new Linux vulnerability that allows attackers to sniff or hijack VPN connections

Bhagyashree R
06 Dec 2019
3 min read
On Wednesday, security researchers from the University of New Mexico disclosed a vulnerability impacting most Linux distributions and Unix-like operating systems including FreeBSD, OpenBSD, macOS, iOS, and Android. This Linux vulnerability can be exploited by an attacker to determine if a user is connected to a VPN and to hijack VPN connections.

The researchers shared that this security flaw, tracked as CVE-2019-14899, “allows a network adjacent attacker to determine if another user is connected to a VPN, the virtual IP address they have been assigned by the VPN server, and whether or not there is an active connection to a given website." Additionally, attackers can determine the exact sequence and acknowledgment numbers by counting encrypted packets or by examining their size. With this information in hand, they can inject arbitrary data payloads into IPv4 and IPv6 TCP streams.

What systems are affected by this Linux vulnerability

While testing for this vulnerability, the researchers found that it did not affect any Linux distribution prior to Ubuntu 19.10. They further noted that all distributions that use 'systemd' versions released after November 28, 2018, which have reverse path filtering (rp_filter) set to “loose” by default, are vulnerable. Here’s a non-exhaustive list of systems that the researchers found vulnerable:

Ubuntu 19.10 (systemd)
Fedora (systemd)
Debian 10.2 (systemd)
Arch 2019.05 (systemd)
Manjaro 18.1.1 (systemd)
Devuan (sysV init)
MX Linux 19 (Mepis+antiX)
Void Linux (runit)
Slackware 14.2 (rc.d)
Deepin (rc.d)
FreeBSD (rc.d)
OpenBSD (rc.d)

Attacks exploiting this Linux vulnerability work against OpenVPN, WireGuard, and IKEv2/IPSec. The team noted they were able to make all the inferences even when the responses from the victim were encrypted: regardless of which VPN technology is used, the size and number of packets sent were enough to infer what kind of packets were being sent through the encrypted VPN tunnel.

In response to the public disclosure, Jason A. Donenfeld, the creator of WireGuard, clarified that "this isn't a WireGuard vulnerability, but rather something in the routing table code and/or TCP code on affected operating systems." He added, “However, it does affect us, since WireGuard exists on those affected OSes.” Network security consultant Noel Kuntze also said in a reply to the disclosure report that only route-based VPN implementations are affected by this Linux vulnerability.

The researchers have also shared a few mitigation strategies, including turning reverse path filtering on, using bogon filtering, and encrypting packet size and timing. You can check out the full disclosure report of this Linux vulnerability for further details.

StackRox Kubernetes Security Platform 3.0 releases with advanced configuration and vulnerability management capabilities
An unpatched vulnerability in NSA’s Ghidra allows a remote attacker to compromise exposed systems
10 times ethical hackers spotted a software vulnerability and averted a crisis

Researchers reveal vulnerability that can bypass payment limits in contactless Visa card

Savia Lobo
02 Aug 2019
5 min read
A few days ago, researchers from Positive Technologies discovered flaws in Visa contactless cards which allow hackers to bypass payment limits. The research was conducted by two of Positive Technologies’ researchers: Leigh-Anne Galloway, Cyber Security Resilience Lead, and Tim Yunusov, Head of Banking Security. The attack was tested with “five major UK banks where it successfully bypassed the UK contactless verification limit of £30 on all tested Visa cards, irrespective of the card terminal”, the researchers mentioned. They added that the contactless Visa card vulnerability is possible on cards outside the UK as well.

How to exploit this contactless Visa card vulnerability?

The attack manipulates two data fields that are exchanged between the card and the terminal during a contactless payment. “Predominantly in the UK, if a payment needs an additional cardholder verification (which is required for payments over 30 pounds in the UK), cards will answer "I can’t do that," which prevents making payments over this limit,” the researchers said. Next, the terminal uses country-specific settings, which demand that the card or mobile wallet provide additional verification of the cardholder, such as through the entry of the card PIN or fingerprint authentication on the phone. The attack bypasses both these checks using a device that intercepts communication between the card and the payment terminal. This device acts as a proxy, thereby conducting a man-in-the-middle (MITM) attack. “This attack is possible because Visa does not require issuers and acquirers to have checks in place that block payments without presenting the minimum verification,” the researchers say. “The attack can also be done using mobile wallets such as GPay, where a Visa card has been added to the wallet. Here, it is even possible to fraudulently charge up to £30 without unlocking the phone,” Positive Technologies mentions in its post.

One of the researchers, Yunusov, said, "The payment industry believes that contactless payments are protected by the safeguards they have put in place, but the fact is that contactless fraud is increasing. While it’s a relatively new type of fraud and might not be the number one priority for banks at the moment, if contactless verification limits can be easily bypassed, it means that we could see more damaging losses for banks and their customers."

A hacker can easily conduct a cardless attack

Forbes explains that criminals could, for instance, take a payment from a card when the user wasn’t looking with their own mobile payments machine (though a malicious merchant would eventually be caught by banks’ fraud systems if they used the same terminal). They could even take a payment reading from a credit card using their mobile phones, send the data to another phone, and make a payment on that second device going beyond the limit, the researchers claimed. “For the hack to work, all the fraudsters need is to be close to their victim,” Forbes mentions. “So that means if you found someone’s card or if someone stole your card, they wouldn’t have to know your PIN, they wouldn’t have to impersonate your signature, and they could make a payment for a much higher value,” Galloway said.

According to UK Finance, fraud on contactless cards and devices increased from £6.7 million in 2016 to £14 million in 2017. £8.4 million was lost to contactless fraud in the first half of 2018.

The researchers suggest that additional security should be provided by the bank issuing the cards, and that issuers shouldn’t rely on Visa alone to provide a secure protocol for payments. “Instead, issuers should have their own measures in place to detect and block this attack vector and other payment attacks,” the researchers say. Galloway says, “It falls to the customer and the bank to protect themselves. While some terminals have random checks, these have to be programmed by the merchant, so it is entirely down to their discretion.” “Because of this, we can expect to see contactless fraud continue to rise. Issuers need to be better at enforcing their own rules on contactless and increasing the industry standard. Criminals will always gravitate to the more convenient way to get money quickly, so we need to make it as difficult as possible to crack contactless,” she further adds.

In the U.S., contactless card transactions are relatively rare, with only about 3 percent of cards falling into this category, CNBC reports. The researchers say the limits attackers can bypass will differ between countries. In the UK, they were able to make payments of £100 without any detection; Galloway says that in the U.S., for instance, the limit is considerably higher, at $100.

What measures is Visa taking to prevent this kind of contactless fraud?

Surprisingly, the company was not alarmed by the findings. In fact, Forbes reports that Visa wasn’t planning on updating its systems anytime soon. “One key limitation of this type of attack is that it requires a physically stolen card that has not yet been reported to the card issuer. Likewise, the transaction must pass issuer validations and detection protocols. It is not a scalable fraud approach that we typically see criminals employ in the real world,” a Visa spokesperson told Forbes. The company also said it was continually working on improving its fraud detection tech.

https://twitter.com/a66ot/status/1155793829443842049

To know more about this news in detail, head over to Positive Technologies’ official post.

A vulnerability found in Jira Server and Data Center allows attackers to remotely execute code on systems
VLC media player affected by a major vulnerability in a 3rd party library, libebml; updating to the latest version may help
A zero-day vulnerability in the Mac Zoom Client allows hackers to enable users’ camera, leaving 750k companies exposed

Google Project Zero discovers a cache invalidation bug in Linux memory management, Ubuntu and Debian remain vulnerable

Melisha Dsouza
01 Oct 2018
4 min read
"Raise your game on merging kernel security fixes, you're leaving users exposed for weeks" -Jann Horn to maintainers of Ubuntu and Debian Jann Horn, the Google Project Zero researcher who discovered the Meltdown and Spectre CPU flaws, is making headlines once again. He has uncovered a cache invalidation bug in the Linux kernel. The kernel bug is a cache invalidation flaw in Linux memory management that has been tagged as CVE-2018-17182. The bug has been already reported to Linux kernel maintainers on September 12. Without any delay, Linux founder, Linus Torvalds fixed this bug in his upstream kernel tree two weeks ago. It was also fixed in the upstream stable kernel releases 4.18.9, 4.14.71, 4.9.128, and 4.4.157 and  3.16.58. Earlier last week, Horn released an "ugly exploit" for Ubuntu 18.04, which "takes about an hour to run before popping a root shell". The Bug discovered by Project Zero The vulnerability is a use-after-free (UAF) attack. It works by exploiting the cache invalidation bug in the Linux memory management system, thus allowing an attacker to obtain root access to the target system. UAF vulnerabilities are a type of ‘memory-based corruption bug’. Once attackers gain access to the system, they can cause system crashes, alter or corrupt data, and gain privileged user access. Whenever a userspace page fault occurs, for instance, when a page has to be paged in on demand, the Linux kernel has to look up the Virtual Memory Area (VMA) that contains the fault address to figure out how to handle the fault. To avoid any performance hit, Linux has a fastpath that can bypass the tree walk if the VMA was recently used. When a VMA is freed, the VMA caches of all threads must be invalidated - otherwise, the next VMA lookup would follow a dangling pointer. However, since a process can have many threads, simply iterating through the VMA caches of all threads would be a performance problem. To solve this, both the struct mm_struct and the per-thread struct vmacache are tagged with sequence numbers. When the VMA lookup fastpath discovers in vmacache_valid() that current->vmacache.seqnum and current->mm->vmacache_seqnum don't match, it wipes the contents of the current thread's VMA cache and updates its sequence number. The sequence numbers of the mm_struct and the VMA cache were only 32 bits wide, meaning that it was possible for them to overflow.  To overcome this, in version 3.16, an optimization was added. However, Horn asserts that this optimization is incorrect because it doesn't take into account what happens if a previously single-threaded process creates a new thread immediately after the mm_struct's sequence number has wrapped around to zero. The bug was fixed by changing the sequence numbers to 64 bits, thereby making an overflow infeasible, and removing the overflow handling logic.   Horn has raised concerns that some Linux distributions are leaving users exposed to potential attacks by not reacting fast enough to frequently updated upstream stable kernel releases. End users of Linux distributions aren't protected until each distribution merges the changes from upstream stable kernels, and then users install that updated release. Between these two points, the issue also gets exposure on public mailing lists, giving both Linux distributions and would-be attackers a chance to take action. As of today, Debian stable and Ubuntu releases 16.04 and 18.04 have not yet fixed the issue, in spite of the latest kernel update occurring around a month earlier. 
This means there's a gap of several weeks between the flaw being publicly disclosed and fixes reaching end users. Canonical, the UK company that maintains Ubuntu, has responded to Horn's blog, and says fixes "should be released" around Monday, October 1. The window of exposure between the time an upstream fix is published and the time the fix actually becomes available to users is concerning. This gap could be utilized by an attacker to write a kernel exploit in the meantime. It is no secret that Linux distributions don’t publish kernel updates regularly. This vulnerability highlights the importance of having a secure kernel configuration. Looks like the team at Linux needs to check and re-check their security patches before it is made available to the public. You can head over to Google Project Zero’s official blog page for more insights on the vulnerability and how it was exploited by Jann Horn. NetSpectre attack exploits data from CPU memory SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets Meet ‘Foreshadow’: The L1 Terminal Fault in Intel’s chips