
Tech News - Artificial Intelligence

61 Articles

Web Applications are Focus of Cybercrime Gangs in Data Breaches, Report Finds from AI Trends

Matthew Emerick
15 Oct 2020
7 min read
By John P. Desmond, AI Trends Editor

Web applications are the primary focus of many cybercrime gangs engaged in data breaches, a top security concern for retailers, according to the 2020 Data Breach Investigations Report (DBIR) recently released by Verizon, the 13th edition of the report. Verizon analyzed a total of 157,525 incidents; 3,950 were confirmed data breaches.

“These data breaches are the most serious type of incident retailers face. Such breaches generally result in the loss of customer data, including, in the worst cases, payment data and log-in and password combinations,” stated Ido Safruti, co-founder and chief technology officer of PerimeterX, a provider of security services for websites, in an account in Digital Commerce 360.

Among the report's highlights:

Misconfiguration errors, resulting from failure to implement all security controls, top the list of the fastest-growing risks to web applications. Across all industries, misconfiguration errors increased from below 20 percent in the 2017 survey to over 40 percent in the 2020 survey. “The reason for this is simple,” Safruti stated. “Web applications are growing more and more complex. What were formerly websites are now full-blown applications made up of dozens of components and leveraging multiple external services.” External code can typically comprise 70 percent or more of a web application, much of it JavaScript calls to external libraries and services. “A misconfigured service or setting for any piece of a web application offers a path to compromise the application and skim sensitive customer data,” Safruti stated.

Cybercriminal gangs work to exploit rapid changes to web applications, as development teams build and ship new code faster and faster, often tapping third-party libraries and services. Weak version control and weak monitoring of web applications for unauthorized code changes are vulnerabilities. Magecart attacks, from a consortium of malicious hacker groups who target online shopping cart systems, especially on large ecommerce sites, insert rogue elements as components of web applications with the goal of stealing shoppers' credit card data. “Retailers should consider advanced technology using automated and audited processes to manage configuration changes,” Safruti advises.

Vulnerabilities are not patched quickly enough, leaving holes for attackers to exploit. Only half of vulnerabilities are patched within three months of discovery, the 2020 DBIR found. These attacks offer hackers the potential to collect large amounts of valuable customer information with the least amount of effort.

Attacks against web application servers made up nearly 75% of breached assets in 2019, up from roughly 50% in 2017, the DBIR found. Organized crime groups undertook roughly two-thirds of breaches, and 86% of breaches were financially motivated. The global average cost of a data breach is $3.92 million, with an average of over $8 million in the United States, according to a 2019 study from the Ponemon Institute, a research center focused on privacy, data protection and information security.

Another analysis of the 2020 DBIR found that hacking and social attacks have leapfrogged malware as the top attack tactics. “Sophisticated malware is no longer necessary to perform an attack,” stated the report in SecurityBoulevard. Developers and QA engineers who build and test web applications would benefit from automated security testing tools and security processes that integrate with their workflow. “We believe developers and DevOps personnel are one of the weakest links in the chain and would benefit the most from remediation techniques,” the authors stated.

Credential Stuffing Attacks Exploit Users with the Same Password Across Sites

Credential stuffing is a cyberattack in which lists of stolen usernames and/or email addresses are used to gain unauthorized access to user accounts through large-scale automated login requests directed against a web application. “Threat actors are always conducting credential stuffing attacks,” found a “deep dive” analysis of the 2020 DBIR from SpyCloud, a security firm focused on preventing online fraud. (A toy sketch of how a defender might spot this pattern in login logs appears at the end of this article.)

The SpyCloud researchers advise users never to reuse passwords across online accounts. “Password reuse is a major factor in credential stuffing attacks,” the authors state. They advise using a password manager and storing a unique, complex password for each account.

The 2020 DBIR found this year's top malware variant to be password dumpers, malware that extracts passwords from infected systems. This malware is aimed at acquiring credentials stored on target computers, or involves keyloggers that capture credentials as users enter them. Some 22 percent of breaches were the result of social attacks, which are cyberattacks that involve social engineering and phishing. Phishing – making fake websites, emails, text messages, and social media messages to impersonate trusted entities – is still a major way that sensitive authentication credentials are acquired illicitly, SpyCloud researchers found. Average consumers are each paying more than $290 in out-of-pocket costs and spending 16 hours to resolve the effects of this data loss and the resultant account takeover, SpyCloud found.

Businesses Increasing Investment in AI for Cybersecurity, Capgemini Finds

To defend against the new generation of cyberattacks, businesses are increasing their investment in AI systems. Two-thirds of organizations surveyed by Capgemini Research last year said they will not be able to respond to critical threats without AI. Capgemini surveyed 850 senior IT executives from IT information security, cybersecurity and IT operations across 10 countries and seven business sectors.

Among the highlights was that AI-enabled cybersecurity is now an imperative: over half (56%) of executives say their cybersecurity analysts are overwhelmed by the vast array of data points they need to monitor to detect and prevent intrusion. In addition, the types of cyberattack that require immediate intervention, or that cannot be remediated quickly enough by cyber analysts, have notably increased, including:

cyberattacks affecting time-sensitive applications (42% said they had gone up, by an average of 16%)
automated, machine-speed attacks that mutate at a pace that cannot be neutralized through traditional response systems (43% reported an increase, by an average of 15%)

Executives interviewed cited benefits of using AI in cybersecurity:

64% said it lowers the cost of detecting breaches and responding to them, by an average of 12%.
74% said it enables a faster response time, reducing the time taken to detect threats, remedy breaches and implement patches by 12%.
69% also said AI improves the accuracy of detecting breaches, and 60% said it increases the efficiency of cybersecurity analysts, reducing the time they spend analyzing false positives and improving productivity.

Budgets for AI in cybersecurity are projected to rise: almost half (48%) of respondents said they are planning 29 percent increases in FY2020, some 73 percent were testing use cases for AI in cybersecurity, and only one in five organizations reported using AI in cybersecurity before 2019.

“AI offers huge opportunities for cybersecurity,” stated Oliver Scherer, CISO of Europe's leading consumer electronics retailer, MediaMarktSaturn Retail Group, in the Capgemini report. “This is because you move from detection, manual reaction and remediation towards an automated remediation, which organizations would like to achieve in the next three or five years.”

Barriers remain, including a lack of understanding of how to scale use cases from proof of concept to full-scale deployment.

“Organizations are facing an unparalleled volume and complexity of cyber threats and have woken up to the importance of AI as the first line of defense,” stated Geert van der Linden, Cybersecurity Business Lead at Capgemini Group. “As cybersecurity analysts are overwhelmed, close to a quarter of them declaring they are not able to successfully investigate all identified incidents, it is critical for organizations to increase investment and focus on the business benefits that AI can bring in terms of bolstering their cybersecurity.”

Read the source articles in the 2020 Data Breach Investigations Report from Verizon, in Digital Commerce 360, in SecurityBoulevard, from SpyCloud and from Capgemini Research.
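The credential-stuffing pattern described above is easiest to picture with a toy example. The sketch below is purely illustrative and is not from the DBIR or SpyCloud: it flags source IPs whose failed logins span many distinct usernames, which is the signature of automated replay of stolen credential lists. The event data, function name, and thresholds are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical login events: (source_ip, username, success)
LOGIN_EVENTS = [
    ("203.0.113.7", "alice", False),
    ("203.0.113.7", "bob", False),
    ("203.0.113.7", "carol", False),
    ("198.51.100.2", "dave", True),
]

def flag_credential_stuffing(events, min_failures=100, min_distinct_users=50):
    """Flag source IPs whose failed logins span many distinct usernames.

    A single user mistyping a password produces failures for one account;
    credential stuffing produces failures spread across many accounts.
    The thresholds here are illustrative, not recommendations.
    """
    failed_users = defaultdict(set)      # ip -> usernames with failed logins
    failure_counts = defaultdict(int)    # ip -> total failed attempts
    for ip, user, success in events:
        if not success:
            failed_users[ip].add(user)
            failure_counts[ip] += 1
    return [
        ip for ip in failed_users
        if failure_counts[ip] >= min_failures
        and len(failed_users[ip]) >= min_distinct_users
    ]

if __name__ == "__main__":
    # With toy thresholds, the first IP above gets flagged.
    print(flag_credential_stuffing(LOGIN_EVENTS, min_failures=3, min_distinct_users=3))
```

A real deployment would also weigh request rate, geography, device fingerprints, and checks against known breach corpora rather than failure counts alone.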


Nvidia and AI researchers create AI agent Noise2Noise that can denoise images

Richard Gall
10 Jul 2018
2 min read
Nvidia has created an AI agent that can clean 'noisy' images - without ever having seen a 'clean' one. Working alongside AI researchers from MIT and Aalto University, the company has created something called 'Noise2Noise'. The team's findings could, they claim, "lead to new capabilities in learned signal recovery using deep neural networks." This could have a big impact on a number of areas, including healthcare.

How researchers trained the Noise2Noise AI agent

The team took 50,000 images from the ImageNet database and manipulated them to look 'noisy'. Noise2Noise then ran on these images and was able to 'denoise' them - without knowing what a clean image looked like. This is the most significant part of the research: the AI agent wasn't learning from clean data, but was instead simply learning the denoising process. (A toy sketch of this training setup appears at the end of this article.) This is an emerging and exciting area in data analysis and machine learning.

In the introduction to their recently published journal article, which coincides with a presentation at the International Conference on Machine Learning in Stockholm this week, the research team explains: "Signal reconstruction from corrupted or incomplete measurements is an important subfield of statistical data analysis. Recent advances in deep neural networks have sparked significant interest in avoiding the traditional, explicit a priori statistical modeling of signal corruptions, and instead learning to map corrupted observations to the unobserved clean versions."

The impact and potential applications of Noise2Noise

Because the Noise2Noise AI agent doesn't require 'clean data' - or the 'a priori statistical modeling of signal corruptions' - it could be applied in a number of very exciting ways. It "points the way to significant benefits in many applications by removing the need for potentially strenuous collection of clean data," the team argues. One of the most interesting potential applications of the research is in the field of MRI scans. An agent like Noise2Noise could produce a more accurate MRI scan than traditional reconstruction approaches, which rely on the Fast Fourier Transform. That could lead to a greater level of detail in MRI scans, helping medical professionals make quicker diagnoses.

Read next:
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Nvidia's Volta Tensor Core GPU hits performance milestones. But is it the best?
How to Denoise Images with Neural Networks
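To make the training idea concrete, here is a minimal, hypothetical PyTorch sketch of Noise2Noise-style training. It is not the researchers' code: it assumes simple additive Gaussian noise and a toy convolutional network. The essential point it illustrates is that both the network's input and its regression target are independently corrupted copies of the same image, so no clean image is ever used for supervision.

```python
import torch
import torch.nn as nn

# Toy stand-in for clean images (the paper used ImageNet crops); random tensors
# here just to make the sketch runnable. Shape: (batch, channels, height, width).
clean = torch.rand(16, 3, 64, 64)

# A deliberately small denoiser; the paper used a much deeper U-Net-style model.
denoiser = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

def corrupt(x, sigma=0.1):
    """Add independent Gaussian noise; two calls give two different noisy views."""
    return x + sigma * torch.randn_like(x)

for step in range(50):
    # The Noise2Noise idea: both the input AND the regression target are noisy.
    # The network never sees a clean image during training.
    noisy_input = corrupt(clean)
    noisy_target = corrupt(clean)
    loss = nn.functional.mse_loss(denoiser(noisy_input), noisy_target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time the trained network maps a noisy image toward its clean version,
# because the expected value of the noisy targets is the clean image.
```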


You can now make music with AI thanks to Magenta.js

Richard Gall
04 May 2018
3 min read
Google Brain's Magenta project has released Magenta.js, a tool that could open up new opportunities for developing music and art with AI. The Magenta team have been exploring a range of ways to create with machine learning, but with Magenta.js they have developed a tool that's going to open up the very domain they've been exploring to new people. Let's take a look at how the tool works, what the aims are, and how you can get involved.

How does Magenta.js work?

Magenta.js is a JavaScript suite that runs on TensorFlow.js, which means it can run machine learning models in the browser. The team explains that JavaScript has been a crucial part of their project, as they have been eager to bridge the gap between the complex research they are doing and their end users. They want their research to result in tools that can actually be used. As they've said before: "...we often face conflicting desires: as researchers we want to push forward the boundaries of what is possible with machine learning, but as tool-makers, we want our models to be understandable and controllable by artists and musicians."

As they note, JavaScript has informed a number of projects that preceded Magenta.js, such as Latent Loops, Beat Blender and Melody Mixer. These tools were all built using MusicVAE, a machine learning model that forms an important part of the Magenta.js suite. The first package you'll want to pay attention to is @magenta/music, which features a number of Magenta's machine learning models for music, including MusicVAE and DrumsRNN. Thanks to Magenta.js you'll be able to get started quickly: a number of the project's pre-trained models are available on GitHub.

What next for Magenta.js?

The Magenta team are keen for people to start using the tools they develop. They want a community of engineers, artists and creatives to help them drive the project forward, and they're encouraging anyone who develops with Magenta.js to contribute to the GitHub repo. Clearly, this is a project where openness is going to be a huge bonus. We're excited to see not only what the Magenta team come up with next, but also the range of projects that are built using it. Perhaps we'll begin to see a whole new creative movement emerge? Read more on the project site.


AI Tools Assisting with Mental Health Issues Brought on by Pandemic  from AI Trends

Matthew Emerick
08 Oct 2020
5 min read
By Shannon Flynn, AI Trends Contributor

The pandemic is a perfect storm for mental health issues. Isolation from others, economic uncertainty, and fear of illness can all contribute to poor mental health - and right now, most people around the world face all three.

New research suggests that the virus is tangibly affecting mental health. Rates of depression and anxiety symptoms are much higher than normal. In some population groups, like students and young people, these numbers are almost double what they've been in the past. Some researchers are even concerned that the prolonged, unavoidable stress of the virus may result in people developing long-term mental health conditions, including depression, anxiety disorders and even PTSD, according to an account in Business Insider. Those on the front lines, like medical professionals, grocery store clerks and sanitation workers, may be at an especially high risk.

Use of Digital Mental Health Tools with AI on the Rise

Automation is already widely used in health care, primarily in the form of technology like AI-based electronic health records and automated billing tools, according to a blog post from ZyDoc, a supplier of medical transcription applications. It's likely that COVID-19 will only increase the use of automation in the industry. Around the world, medical providers are adopting new tech, like self-piloting robots that act as hospital nurses. These providers are also using UV light-based cleaners to sanitize entire rooms more quickly. Digital mental health tools are also on the rise, along with fully automated AI tools that help patients get the care they need.

The AI-powered behavioral health platform Quartet, for example, is one of several automated tools that aim to help diagnose patients, screening them for common conditions like depression, anxiety, and bipolar spectrum disorders, according to a recent account in AI Trends. Other software, like a new app developed by engineers at the University of New South Wales in Sydney, Australia, can screen patients for different mental health conditions, including dementia. With a diagnosis, patients are better equipped to find the care they need, such as from mental health professionals with in-depth knowledge of a particular condition.

Another tool, an AI-based chatbot called Woebot, developed by Woebot Labs, Inc., uses brief daily chats to help people maintain their mental health. The bot is designed to teach skills related to cognitive behavioral therapy (CBT), a form of talk therapy that assists patients with identifying and managing maladaptive thought patterns. In April, Woebot Labs updated the bot to provide specialized COVID-19-related support in the form of a new therapeutic modality, called Interpersonal Psychotherapy (IPT), which helps users "process loss and role transition," according to a press release from the company. Both Woebot and Quartet provide 24/7 access to mental health resources via the internet. This means that, so long as a person has an internet connection, they can't be deterred by an inaccessible building or lengthy waitlist.

New AI Tools Supporting Clinicians

Some groups need more support than others. Clinicians working in hospitals are some of the most vulnerable to stress and anxiety. Right now, they're facing long hours, high workloads, and frequent potential exposure to COVID.
Developers and health care professionals are also working together to create new AI tools that will support clinicians as they tackle the challenges of providing care during the pandemic.

One new AI-powered mental health platform, developed by the mobile mental health startup Rose, will gather real-time data on how clinicians are feeling via "questionnaires and free-response journal entries, which can be completed in as few as 30 seconds," according to an account in Fierce Healthcare. The tool will scan through these responses, tracking the clinician's mental health and stress levels. Over time, it should be able to identify situations and events likely to trigger dips in mental health or increased anxiety, and tentatively diagnose conditions like depression, anxiety, and trauma.

Front-line health care workers are up against an unprecedented challenge, facing a wave of new patients and potential exposure to COVID, according to Kavi Misri, founder and CEO of Rose. As a result, many of these workers may be more vulnerable to stress, anxiety and other mental health issues. "We simply can't ignore this emerging crisis that threatens the mental health and stability of our essential workers - they need support," stated Misri.

Rose is also providing clinicians access to more than 1,000 articles and videos on mental health topics. Each user's feed of content is curated based on the data gathered by the platform. Right now, Brigham and Women's Hospital, the second-largest teaching hospital at Harvard, is experimenting with the technology in a pilot program. If effective, the tech could soon be used around the country to support clinicians on the front lines of the crisis.

Mental health will likely remain a major challenge for as long as the pandemic persists. Fortunately, AI-powered experimental tools for mental health should help to manage the stress, depression and trauma that has developed from dealing with COVID-19.

Read the source articles and information in Business Insider, a blog post from ZyDoc, in AI Trends, a press release from Woebot Labs, and in Fierce Healthcare.

Shannon Flynn is a managing editor at Rehack, a website featuring coverage of a range of technology niches.


Handpicked for your weekend Reading - 1st Dec 2017

Aarthi Kumaraswamy
01 Dec 2017
1 min read
Expert in Focus: Sebastian Raschka, on how Machine Learning has become more accessible

3 Things that happened this week in Data Science News
Data science announcements at Amazon re:invent 2017
IOTA, the cryptocurrency that uses Tangle instead of blockchain, announces Data Marketplace for Internet of Things
Cloudera Altus Analytic DB: Modernizing the cloud-based data warehouses

Get hands-on with these Tutorials
Building a classification system with logistic regression in OpenCV
How to build a Scatterplot in IBM SPSS

Do you agree with these Insights & Opinions?
Highest Paying Data Science Jobs in 2017
5 Ways Artificial Intelligence is Transforming the Gaming Industry
10 Algorithms every Machine Learning Engineer should know


Bitcoin Core escapes a collapse from a Denial-of-Service vulnerability

Savia Lobo
21 Sep 2018
2 min read
A few days back, Bitcoin Core developers discovered a vulnerability in the Bitcoin Core software that would have allowed a miner to insert a 'poisoned block' into its blockchain, crashing the nodes running the Bitcoin software around the world. The software patch notes state, "A denial-of-service vulnerability (CVE-2018-17144) exploitable by miners has been discovered in Bitcoin Core versions 0.14.0 up to 0.16.2." The developers recommend that users upgrade any of the vulnerable versions to 0.16.3 as soon as possible.

CVE-2018-17144: The denial-of-service vulnerability

The vulnerability was introduced in Bitcoin Core version 0.14.0, which was first released in March 2017. But the issue wasn't found until just two days ago, prompting contributors to the codebase to take action and ultimately release a tested fix within 24 hours.

In a report by The Next Web: "The bug relates to its consensus code. It meant that some miners had the option to send transaction data twice, causing the Bitcoin network to crash when attempting to validate them. As such invalid blocks need to be mined anyway, only those willing to disregard block reward of 12.5BTC ($80,000) could actually do any real damage." (A simplified sketch of the kind of duplicate-spend check involved appears at the end of this article.)

The bug was not only in the Bitcoin protocol but also in its most popular software implementation, and some cryptocurrencies built using Bitcoin Core's code were also affected. For example, Litecoin patched the same vulnerability on Tuesday. However, Bitcoin is far too decentralized to be brought down by any single entity. TNW also states, "While never convenient, responding appropriately to such potential dangers is crucial to maintaining the integrity of blockchain tech – especially when reversing transactions is not an option."

The discovery of this vulnerability was, then, a narrow escape from a potential Bitcoin collapse. To read about this news in detail, head over to The Next Web's full coverage.

Read next:
A Guide to safe cryptocurrency trading
Apple changes app store guidelines on cryptocurrency mining
Crypto-ML, a machine learning powered cryptocurrency platform
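For illustration only, here is a simplified Python sketch of the kind of duplicate-input check a block validator needs in order to reject a transaction that spends the same previous output twice. This is not the actual Bitcoin Core patch, and the block and transaction structures below are made-up stand-ins for the real data formats.

```python
def block_has_duplicate_inputs(block):
    """Return True if any previous output (txid, output index) is spent more
    than once across the block's transactions.

    `block` is assumed to be a list of transactions, each a dict with an
    "inputs" list of (txid, vout) pairs - a toy stand-in for real block data.
    """
    seen = set()
    for tx in block:
        for prevout in tx["inputs"]:
            if prevout in seen:
                return True      # same output spent twice: reject the block
            seen.add(prevout)
    return False

# Toy example: the second transaction reuses the first transaction's input.
block = [
    {"inputs": [("aa" * 32, 0)]},
    {"inputs": [("aa" * 32, 0), ("bb" * 32, 1)]},
]
assert block_has_duplicate_inputs(block)
```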

Paper in Two minutes: A novel method for resource efficient image classification

Sugandha Lahoti
23 Mar 2018
4 min read
This ICLR 2018 accepted paper, Multi-Scale Dense Networks for Resource Efficient Image Classification, introduces a new model for performing image classification with limited computational resources at test time. The paper is authored by Gao Huang, Danlu Chen, Tianhong Li, Felix Wu, Laurens van der Maaten, and Kilian Weinberger. The 6th annual ICLR conference is scheduled to take place between April 30 and May 3, 2018.

Using a multi-scale convolutional neural network for resource-efficient image classification

What problem is the paper attempting to solve?

Recent years have witnessed a surge in demand for applications of visual object recognition, for instance in self-driving cars and content-based image search. This demand is driven by the astonishing progress of convolutional networks (CNNs), where state-of-the-art models may have even surpassed human-level performance. However, most are complex models with high computational demands at inference time. In real-world applications, computation is never free; it directly translates into power consumption, which should be minimized for environmental and economic reasons. Ideally, a system should automatically use small networks when test images are easy or computational resources are limited, and big networks when test images are hard or computation is abundant.

To enable resource-efficient image recognition, the authors aim to develop CNNs that slice the computation and process these slices one by one, stopping the evaluation once the CPU time is depleted or the classification is sufficiently certain. Unfortunately, CNNs learn the data representation and the classifier jointly, which leads to two problems:

1. The features in the last layer are extracted directly to be used by the classifier, whereas earlier features are not.
2. The features in different layers of the network may have a different scale. Typically, the first layers of deep nets operate on a fine scale (to extract low-level features), whereas later layers transition to coarse scales that allow global context to enter the classifier.

The authors propose a novel network architecture that addresses both problems through careful design changes, allowing for resource-efficient image classification.

Paper summary

The model is based on a multi-scale convolutional neural network similar to the neural fabric, but with dense connections and with a classifier at each layer. This novel architecture, called the Multi-Scale DenseNet (MSDNet), addresses both of the problems described above (classifiers altering the internal representation, and the lack of coarse-scale features in early layers) for resource-efficient image classification. The network uses a cascade of intermediate classifiers throughout the network. (A much-simplified sketch of the early-exit idea appears at the end of this article.)

The first problem is addressed through the introduction of dense connectivity. By connecting all layers to all classifiers, features are no longer dominated by the most imminent early exit, and the trade-off between early or later classification can be performed elegantly as part of the loss function. The second problem is addressed by adopting a multi-scale network structure. At each layer, features of all scales (fine to coarse) are produced, which facilitates good classification early on but also extracts low-level features that only become useful after several more layers of processing.

Key Takeaways

MSDNet is a novel convolutional network architecture optimized to incorporate CPU budgets at test time. The design is based on two high-level principles: generate and maintain coarse-level features throughout the network, and interconnect the layers with dense connectivity. The final network design is a two-dimensional array of horizontal and vertical layers, which decouples depth and feature coarseness. Whereas in traditional convolutional networks features only become coarser with increasing depth, MSDNet generates features of all resolutions from the first layer on and maintains them throughout. Through experiments, the authors show that their network outperforms all competitive baselines on an impressive range of budgets, from highly limited CPU constraints to almost unconstrained settings.

Reviewer feedback summary

Overall Score: 25/30. Average Score: 8.33.

The reviewers found the approach to be natural and effective, with good results. They found the presentation clear and easy to follow, and the structure of the network clearly justified. The reviewers found the use of dense connectivity to avoid the loss of performance from early-exit classifiers interesting. They appreciated the results and found them quite promising, with 5x speed-ups and the same or better accuracy than previous models. However, some reviewers pointed out that the results about the more efficient DenseNet* could have been shown in the main paper.
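The sketch below is a heavily simplified, hypothetical PyTorch illustration of the early-exit idea only: intermediate classifiers plus a confidence threshold for anytime prediction. It is not MSDNet itself, since it omits the multi-scale feature maps and dense connections that are the paper's main contributions, and all layer sizes and the threshold are arbitrary.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """A toy network with a classifier attached after each block.

    MSDNet additionally uses multiple feature scales and dense connections;
    this sketch only illustrates budgeted inference via intermediate exits.
    """
    def __init__(self, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()),
        ])
        self.exits = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(c, num_classes))
            for c in (16, 32, 64)
        ])

    def forward(self, x, confidence_threshold=0.9):
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits = exit_head(x)
            confidence = logits.softmax(dim=1).max(dim=1).values
            # Stop as soon as the current exit is confident enough
            # (checked here for a batch of size 1, for simplicity).
            if confidence.item() >= confidence_threshold:
                return logits
        return logits

net = EarlyExitNet()
image = torch.rand(1, 3, 32, 32)
print(net(image).shape)   # torch.Size([1, 10])
```

At training time one would sum (and possibly weight) the losses from all exits so that early classifiers become useful without degrading the final one, which is where the paper's dense connectivity argument comes in.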


Nvidia's Volta Tensor Core GPU hits performance milestones. But is it the best?

Richard Gall
08 May 2018
3 min read
Nvidia has revealed that its Volta Tensor Core GPU has hit some significant performance milestones. This is big news for the world of AI: it raises the bar in terms of the complexity and sophistication of the deep learning models that can be built. The Volta Tensor Core GPU has, according to the Nvidia team, "achieved record-setting ResNet-50 performance for a single chip and single server" thanks to the updates and changes they have made.

Here are the headline records and milestones the Volta Tensor Core GPU has hit, according to the team's intensive and rigorous testing:

When training ResNet-50, one V100 Tensor Core GPU can process more than 1,075 images per second. That is apparently four times more than the Pascal GPU, the previous generation of Nvidia's GPU microarchitecture.
Last year, one DGX-1 server powered by 8 Tensor Core V100s could process 4,200 images a second (still a hell of a lot). Now it can process 7,850.
One AWS P3 cloud instance powered by 8 Tensor Core V100s can train ResNet-50 in less than 3 hours. That's three times faster than on a single TPU.

But what do these advances in performance mean in practice? And has Nvidia really managed to outperform its competitors?

Volta Tensor Core GPUs might not be as fast as you think

Nvidia is clearly pretty excited about what it has achieved. Certainly the power of the Volta Tensor Core GPUs is impressive and not to be sniffed at. But the website ExtremeTech poses a caveat. The piece argues that there are problems with using FLOPS (floating point operations per second) as a metric for performance, because the formula used to calculate FLOPS assumes a degree of consistency in how something is processed that may be misleading. One GPU, for example, might have higher potential FLOPS but not be running at capacity; it could, of course, be outperformed by an 'inferior' GPU. What ultimately matters is throughput measured on a real workload, such as images processed per second (a rough, illustrative way to measure this appears at the end of this article).

Other studies (this one from RiseML) have indicated that Google's TPU actually performs better than Nvidia's offering (when using a different test). Admittedly the difference wasn't huge, but it matters when you consider that the TPU is significantly cheaper than the Volta. Ultimately, the difference between the two is as much about what you want from your GPU or TPU. Google might give you a little more power, but there's much less flexibility than you get with the Volta. It will be interesting to see how the competition changes over the next few years. Based on current form, Nvidia and Google are going to be leading the way for some time, whoever has bragging rights over performance.

Read next:
Distributed TensorFlow: Working with multiple GPUs and servers
Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine
OpenAI announces block sparse GPU kernels for accelerating neural networks
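As a rough illustration of measuring achieved throughput rather than quoting peak FLOPS, the snippet below times inference (not training, unlike Nvidia's published milestones) for torchvision's ResNet-50. The batch size and iteration counts are arbitrary, and the numbers it prints depend entirely on your hardware; it says nothing about the specific figures Nvidia or Google report.

```python
import time
import torch
from torchvision.models import resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
model = resnet50().to(device).eval()
batch = torch.rand(32, 3, 224, 224, device=device)

with torch.no_grad():
    for _ in range(5):                      # warm-up iterations
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()            # wait for queued GPU kernels
    start = time.time()
    iters = 20
    for _ in range(iters):
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.time() - start

# Achieved images/sec on this workload - the number that matters in practice,
# regardless of the hardware's theoretical peak FLOPS.
print(f"{iters * batch.shape[0] / elapsed:.1f} images/sec on {device}")
```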


Thanks to DeepCode, AI can help you write cleaner code

Richard Gall
30 Apr 2018
2 min read
DeepCode is a tool that uses artificial intelligence to help software engineers write cleaner code. It's a bit like Grammarly or the Hemingway Editor, but for code. It works in an ingenious way: using AI, it reads your GitHub repositories and highlights anything that might be broken or cause compatibility issues. It is currently only available for Java, JavaScript, and Python, but more languages are going to be added.

DeepCode is more than a debugger

Sure, DeepCode might sound a little like a glorified debugger. But it's important to understand that it's much more than that. It doesn't just correct errors; it can actually help you improve the code you write. That means the project's mission isn't just code that works, but code that works better. It's thanks to AI that DeepCode is able to support code performance too - the software learns 'rules' about how code works best. And because DeepCode is an AI system, it's only going to get better as it learns more.

Speaking to TechCrunch, Boris Paskalev claimed that DeepCode has more than 250,000 rules, a number that is "growing daily." Paskalev went on to explain: "We built a platform that understands the intent of the code... We autonomously understand millions of repositories and note the changes developers are making. Then we train our AI engine with those changes and can provide unique suggestions to every single line of code analyzed by our platform."

DeepCode is a compelling prospect for developers. As applications become more complex and efficiency becomes ever more important, a simple route to unlocking greater performance could be invaluable. It's no surprise that the company has already raised $1.1 million in investment from the VC firm btov, and it's only going to become more popular with investors as the platform grows. This might mean the end of spaghetti code, which can only be a good thing.

Find out more about DeepCode and its pricing on the project's site.

Read more: Active Learning: An approach to training machine learning models efficiently


Microsoft start AI School to teach Machine Learning and Artificial Intelligence

Amey Varangaonkar
25 Jun 2018
3 min read
The race for cloud supremacy is getting more interesting with every passing day. The three major competitors - Amazon, Google and Microsoft - seem to be coming up with fresh and innovative ideas to attract customers and get them to try and adopt their cloud offerings. The most recent dice was thrown by Google, when they announced their free Big Data and Machine Learning training courses for the Google Cloud Platform. These courses allowed students to build intelligent models on the Google cloud using cloud-powered resources. Microsoft have now followed suit with their own AI School, the promise of which is quite similar: allowing professionals to build smart solutions for their businesses using the Microsoft AI platform on Azure.

AI School: Offering custom learning paths to master Artificial Intelligence

Everyone has a different style and pace of learning. Keeping this in mind, Microsoft have segregated their learning material into different levels: beginner, intermediate and advanced. This helps intermediate and advanced learners pick up the relevant topics they want to skill up in, without having to compulsorily go through the basics, yet giving them the option to do so in case they're interested.

The topic coverage in the AI School is quite interesting as well, from an introduction to deep learning and Artificial Intelligence to building custom conversational AI. In the process, students will use a myriad of tools such as Azure Cognitive Services and the Microsoft Bot Framework for pre-trained AI models, Azure Machine Learning for deep learning and machine learning capabilities, as well as Visual Studio and the Cognitive Toolkit. Students will also have the option of working with their favourite programming language, from Java, C# and Node.js to Python and JavaScript. The end goal of this program, as Microsoft puts it, is to empower developers to use trending Artificial Intelligence capabilities within their existing applications to make them smarter and more intuitive, all while leveraging the power of the Microsoft cloud.

Google and Microsoft have stepped up; time for Amazon now?

Although Amazon does provide training and certifications for Machine Learning and AI, they are yet to launch their own courses to encourage learners to pick up these trending technologies from scratch and adopt AWS to build their own intelligent models. Considering they dominate the cloud market with almost two-thirds of the market share, this is quite surprising.

Another interesting point to note is that Microsoft and Google have both taken significant steps to contribute to open source and free learning. While Google-acquired Kaggle is a great platform to host machine learning competitions and thereby learn new, interesting things in the AI space, Microsoft's recent acquisition of GitHub takes them in a similar direction of promoting open source culture and sharing free knowledge. Is Amazon waiting for a similar acquisition before they take this step in promoting open source learning? We will have to wait and see.

AI chipmaking startup ‘Graphcore’ raises $200m from BMW, Microsoft, Bosch, Dell

Melisha Dsouza
18 Dec 2018
2 min read
Today, Graphcore, a UK-based chipmaking startup, has raised $200m in a Series D funding round from investors including Microsoft and BMW, valuing the company at $1.7bn. This new funding brings the total capital raised by Graphcore to date to more than $300m. The funding round was led by the U.K. venture capital firm Atomico and Sofina, with participation from some of the biggest names in the AI and machine learning industry, including Merian Global Investors, BMW i Ventures, Microsoft, Amadeus Capital Partners, Robert Bosch Venture Capital, and Dell Technologies Capital, among many others. The company intends to use the funds to execute on its product roadmap, accelerate scaling and expand its global presence.

Graphcore, which designs chips purpose-built for artificial intelligence, is attempting to create a new class of chips that are better able to deal with the huge amounts of data needed to make AI computers. The company is ramping up production to meet customer demand for its Intelligence Processing Unit (IPU) PCIe processor cards, the first to be designed specifically for machine intelligence training and inference.

Nigel Toon, CEO and co-founder of Graphcore, said that Graphcore's processing units can be used for both the training and deployment of machine learning systems, and that they are "much more efficient". Tobias Jahn, principal at BMW i Ventures, stated that Graphcore's technology "is well-suited for a wide variety of applications from intelligent voice assistants to self-driving vehicles."

Last year the company raised $50 million from investors including Demis Hassabis, co-founder of DeepMind; Zoubin Ghahramani of Cambridge University, chief scientist at Uber; Pieter Abbeel from UC Berkeley; and Greg Brockman, Scott Gray and Ilya Sutskever from OpenAI. Head over to Graphcore's official blog for more on this news.

Read next:
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
NVIDIA makes its new "brain for autonomous AI machines", Jetson AGX Xavier Module, available for purchase
NVIDIA demos a style-based generative adversarial network that can generate extremely realistic images; has ML community enthralled


Amazon admits that facial recognition technology needs to be regulated

Richard Gall
08 Feb 2019
4 min read
The need to regulate facial recognition technology has been a matter of debate for the last year. Since news broke that Amazon had sold its facial recognition product Rekognition to a number of law enforcement agencies in the U.S. in the first half of 2018, criticism of the technology has been constant. It has arguably become the focal point for the ongoing discussion about the relationship between tech and government. Despite months of criticism and scrutiny, from inside and outside the company, Amazon's leadership has now said that it, too, believes facial recognition technology needs to be regulated.

In a blog post published yesterday, Michael Punke, VP of Public Policy at AWS (and author of The Revenant, trivia fans), clarified Amazon's position on the use and abuse of Rekognition. He also offered some guidelines that he argued should be followed when using facial recognition technologies to protect against misuse.

Michael Punke defends Rekognition

Punke initially takes issue with some of the tests done by the likes of the ACLU, which found that the tool matched 28 members of Congress with mugshots. Tests like this are misleading, Punke claims, because "the service was not used properly... When we've re-created their tests using the service correctly, we've shown that facial recognition is actually a very valuable tool for improving accuracy and removing bias when compared to manual, human processes." Punke also highlights that where Rekognition has been used by law enforcement agencies, Amazon has not "received a single report of misuse."

Nevertheless, he goes on to emphasise that Amazon does indeed accept the need for regulation. This suggests that, in spite of its apparent success, there has been an ongoing conversation on the topic inside AWS. Managing public perception was likely an important factor here. "We've talked to customers, researchers, academics, policymakers, and others to understand how to best balance the benefits of facial recognition with the potential risks," he writes. Out of these conversations, Punke explains, Amazon has developed its own set of guidelines for how Rekognition should be used.

Amazon's proposed guidelines for facial recognition technology

Punke, and by extension Amazon, argues that, first and foremost, facial recognition technology must be used in accordance with the law. He stresses that this includes any civil rights legislation designed to protect vulnerable and minority groups. "Our customers are responsible for following the law in how they use the technology," he writes. He also points out that Amazon already has a policy forbidding the illegal use of its products: the AWS Acceptable Use Policy. This does, of course, only go so far. Punke seems well aware of this, writing that Amazon "have and will continue to offer our support to policymakers and legislators in identifying areas to develop guidance or legislation to clarify the proper application of those laws."

Human checks and transparency

Beyond this basic point, there are a number of other guidelines specified by Punke, mainly to do with human checks and transparency. Punke writes that when facial recognition technology is used by law enforcement agencies, human oversight is required to act as a check on the algorithm. This is particularly important when the use of facial recognition technology could violate an individual's civil liberties. Put simply, the deployment of any facial recognition technology requires human judgement at every stage.

Punke adds a caveat: a 99% confidence threshold should be met in cases where facial recognition could violate someone's civil liberties, and even then the technology should only ever be one component within a given investigation, never the "sole determinant". (A hypothetical sketch of what such a threshold-plus-human-review flow might look like appears at the end of this article.) Finally, Punke stresses the importance of transparency. This means two things: law enforcement agencies being transparent in how they actually use facial recognition technology, and physical public notices when facial recognition technology could be used in a surveillance context.

What does it all mean?

In truth, Punke's blog post doesn't mean all that much. The bulk of it is, after all, about actions Amazon is already taking and conversations it claims are ongoing. But it does tell us that Amazon can see trouble brewing and that it wants to control the narrative when it comes to facial recognition technology. "New technology should not be banned or condemned because of its potential misuse," Punke argues - a point which sounds reasonable but fails to properly engage with the reality that the potential for misuse outweighs the usefulness, especially in the hands of government and law enforcement.
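To picture how a 99% threshold plus mandatory human review might look in code, here is a hypothetical boto3 sketch against Rekognition's CompareFaces API. The image file names and the review function are invented for illustration, credentials and region are assumed to be configured, and this is not an Amazon-recommended implementation.

```python
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

def candidate_matches(source_bytes, target_bytes, threshold=99.0):
    """Return face matches at or above the similarity threshold.

    Even at 99%, the point in the article stands: a match should only ever be
    one input to a human-led investigation, never the sole determinant.
    """
    response = rekognition.compare_faces(
        SourceImage={"Bytes": source_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=threshold,
    )
    return response["FaceMatches"]

def review_queue(matches):
    # Hypothetical placeholder: route every candidate match to a human analyst.
    for match in matches:
        print(f"Queue for human review: similarity {match['Similarity']:.2f}%")

if __name__ == "__main__":
    # "probe.jpg" and "gallery.jpg" are hypothetical local files.
    with open("probe.jpg", "rb") as src, open("gallery.jpg", "rb") as tgt:
        review_queue(candidate_matches(src.read(), tgt.read()))
```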


DeepMind Artificial Intelligence can spot over 50 sight-threatening eye diseases with expert accuracy

Sugandha Lahoti
14 Aug 2018
3 min read
DeepMind's health division has achieved a major milestone: it has developed an artificial intelligence system that can detect over 50 sight-threatening eye diseases with the accuracy of an expert doctor. The system can quickly interpret eye scans and correctly recommend how patients should be referred for treatment. It is the result of a collaboration with Moorfields Eye Hospital; the partnership was announced in 2016 to jointly address some of today's most pressing eye conditions.

How Artificial Intelligence beats current OCT scanners

Currently, eye care doctors use optical coherence tomography (OCT) scans to help diagnose eye conditions. OCT scans are often hard to read and take time for experts to interpret. The time required can cause long delays between scan and treatment, which can be troublesome if someone needs urgent care. DeepMind's AI system can automatically detect the features of eye diseases within seconds. It can also prioritize patients by recommending whether they should be referred for treatment urgently.

System architecture

The system uses an easily interpretable representation sandwiched between two different neural networks. The first network, known as the segmentation network, analyses the OCT scan and provides a map of the different types of eye tissue and the features of disease it observes. The second network, known as the classification network, analyses the map to present eye care professionals with diagnoses and a referral recommendation. The system expresses the referral recommendation as a percentage, allowing clinicians to assess the system's confidence. (A toy sketch of this two-stage design appears at the end of this article.)

AI-powered dataset

DeepMind has also developed one of the best AI-ready databases for eye research in the world. The original dataset held by Moorfields was suitable for clinical use, but not for machine learning research. The improved database is a non-commercial public asset owned by Moorfields and is currently being used by hospital researchers for nine separate studies into a wide range of conditions.

DeepMind's initial research has yet to be turned into a usable product, and it must still undergo rigorous clinical trials and regulatory approval before being used in practice. Once validated for general use, the system would be available for free across all 30 of Moorfields' UK hospitals and community clinics for an initial period of five years.

You can read more about the announcement on the DeepMind Health blog, or read the paper in Nature Medicine.

Read next:
Reinforcement learning optimizes brain cancer treatment to improve patient quality of life
AI beats Chinese doctors in a tumor diagnosis competition
23andMe shares 5mn client genetic data with GSK for drug target discovery
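As a toy illustration of the two-stage design described above, and emphatically not DeepMind's model, the PyTorch sketch below wires a small "segmentation" network that outputs a per-pixel tissue map into a separate "classification" network that outputs referral probabilities. The class counts, layer sizes, and referral categories are invented for the example.

```python
import torch
import torch.nn as nn

NUM_TISSUE_TYPES = 15   # illustrative; the real system maps many tissue/pathology classes
NUM_REFERRALS = 4       # e.g. urgent / semi-urgent / routine / observation only

# Stage 1: segmentation network - raw OCT slice in, per-pixel tissue map out.
segmentation_net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_TISSUE_TYPES, 1),
)

# Stage 2: classification network - tissue map in, referral recommendation out.
classification_net = nn.Sequential(
    nn.Conv2d(NUM_TISSUE_TYPES, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_REFERRALS),
)

oct_slice = torch.rand(1, 1, 128, 128)                    # stand-in for one OCT B-scan
tissue_map = segmentation_net(oct_slice).softmax(dim=1)   # interpretable intermediate map
referral_probs = classification_net(tissue_map).softmax(dim=1)

# Expressing the recommendation as percentages lets a clinician judge confidence.
for name, p in zip(["urgent", "semi-urgent", "routine", "observation"], referral_probs[0]):
    print(f"{name}: {p.item():.1%}")
```

The interpretable tissue map in the middle is the point of the design: a clinician can inspect what the first network saw before trusting the second network's recommendation.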

Google Cloud Next: Fei-Fei Li reveals new AI tools for developers

Richard Gall
25 Jul 2018
3 min read
AI was always going to be a central theme of this year's Google Cloud Next, and the company hasn't disappointed. In a blog post, Fei-Fei Li, Chief Scientist at Google AI, has revealed a number of new products that will make AI more accessible for developers.

Expanding Cloud AutoML

[Image: Fei-Fei Li at AI for Good in 2017 (via commons.wikimedia.org)]

In her blog post, Li notes that there is a "significant gap" in the machine learning world. On the one hand, data scientists build solutions from the ground up; on the other, pre-trained solutions can deliver immediate results with little work from engineers. With Cloud AutoML, Google has made a pitch to the middle ground: those that require more sophistication than pre-built models, but don't have the resources to build a system from scratch.

Li provides detail on a number of new developments within the Cloud AutoML project that are being launched as part of Google Cloud Next. This includes AutoML Vision, which "extends the Cloud Vision API to recognize entirely new categories of images." It also includes two completely new language-related machine learning tools: AutoML Natural Language and AutoML Translation. AutoML Natural Language will allow users to perform natural language processing, which could, for example, help organizations manage content at scale. AutoML Translation, meanwhile, could be particularly useful for organizations looking to go global with content distribution and marketing.

Improvements to Google machine learning APIs

Li also revealed that Google is launching updates to a number of key APIs. The Google Cloud Vision API "now recognizes handwriting, supports additional file types (PDF and TIFF) and product search, and can identify where an object is located within an image," according to Li. (A brief sketch of calling the Vision API's handwriting-capable endpoint appears at the end of this article.) The Cloud Text-to-Speech and Cloud Speech-to-Text APIs also have updates that build in greater sophistication in areas such as translation.

Bringing AI to customer service with Contact Center AI

The final important announcement by Li centers on conversational UI using AI. Part of this was an update to Dialogflow Enterprise Edition, a Google-owned tool that makes building conversational UIs easier. Text-to-speech capabilities have been added to the tool alongside its speech-to-text capability, which came with its launch in November 2017. But the big reveal is Contact Center AI. This builds on Dialogflow and is essentially a complete customer service AI solution. Contact Center AI bridges the gap between virtual assistant and human customer service representative, supporting the entire journey from customer query to resolution. It has the potential to be a game changer for customer support.

Read next:
Decoding the reasons behind Alphabet's record high earnings in Q2 2018
Google Cloud Launches Blockchain Toolkit to help developers build apps easily
Google's Daydream VR SDK finally adds support for two controllers
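For the handwriting and dense-text capability, the Cloud Vision API is typically called through document text detection. The sketch below uses the google-cloud-vision Python client as one illustrative way in: the file name is hypothetical, application credentials are assumed to be configured, and exact class paths can vary between client-library versions (older releases used vision.types.Image).

```python
from google.cloud import vision

def read_handwriting(path):
    """Send an image to the Cloud Vision API and return the detected text."""
    client = vision.ImageAnnotatorClient()   # assumes application credentials are set up
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    # document_text_detection is the dense-text / handwriting variant of OCR.
    response = client.document_text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    return response.full_text_annotation.text

if __name__ == "__main__":
    print(read_handwriting("note.jpg"))      # "note.jpg" is a hypothetical local file
```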


Amazon tried to sell its facial recognition technology to ICE in June, emails reveal

Richard Gall
24 Oct 2018
3 min read
It has emerged that Amazon representatives met with Immigration and Customs Enforcement (ICE) this summer in a bid to sell its facial recognition tool Rekognition. Emails obtained by The Daily Beast show that officials from Amazon met with ICE on June 12 in Redwood City. In that meeting, Amazon outlined some of AWS's capabilities, stating that "we are ready and willing to help support the vital HSI [Homeland Security Investigations] mission." The emails also show that Amazon was keen to set up a "workshop" with U.S. Homeland Security, and "a meeting to review the process in more depth and help assess your target list of 'Challenges [capitalization intended]'." What exactly these 'Challenges' refer to is unclear.

The controversy around Amazon's Rekognition tool

These emails will only serve to increase the controversy around Rekognition and Amazon's broader involvement with security services. Earlier this year the ACLU (American Civil Liberties Union) revealed that a small number of law enforcement agencies were using Rekognition for various purposes. Later, in July, the ACLU published the results of its own experiment with Rekognition, in which it incorrectly matched mugshots with 28 members of Congress. Amazon responded to this research with a rebuttal on the AWS blog. In it, Dr. Matt Wood stated that "machine learning is a very valuable tool to help law enforcement agencies, and while being concerned it's applied correctly, we should not throw away the oven because the temperature could be set wrong and burn the pizza." This post was referenced in the email correspondence between Amazon and ICE. Clearly, the issue of accuracy was a factor in the company's discussions with security officials.

The controversy continued this month after an employee published an anonymous letter on Medium urging the company not to sell Rekognition to police. They wrote: "When a company puts new technologies into the world, it has a responsibility to think about the consequences."

Amazon claims Rekognition isn't a surveillance service

We covered this story on the Packt Hub last week. Following publication, an Amazon PR representative contacted us, stating that "Amazon Rekognition is NOT a surveillance service" [emphasis the writer's, not mine]. The representative also cited the post by Dr. Matt Wood mentioned above, keen to tackle some of the challenges presented by the ACLU research. Although Amazon's position is clear, it will be difficult for the organization to maintain that line given these emails. Separating the technology from its deployment is all well and good until it's clear that you're courting the kind of deployment for which you are being criticised.

Note 10.30.2018: An Amazon spokesperson responded with a comment, wishing to clarify the events described from the company's perspective: "We participated with a number of other technology companies in technology 'boot camps' sponsored by McKinsey & Company, where a number of technologies were discussed, including Rekognition. As we usually do, we followed up with customers who were interested in learning more about how to use our services (Immigration and Customs Enforcement was one of those organizations where there was follow-up discussion)."