
Tech Guides - Artificial Intelligence

170 Articles

Here's how you can handle the bias variance trade-off in your ML models

Savia Lobo
22 Jan 2018
8 min read
Many organizations rely on machine learning techniques in their day-to-day workflow to cut down the time required to do a job. These techniques are robust because they undergo extensive testing in order to make correct predictions about any data fed into them. During this phase, certain errors are also generated, which can lead to an inconsistent ML model. The two common errors we are going to look at in this article are bias and variance, and how a trade-off can be achieved between the two in order to build a successful ML model.

Let's first look at what creates these kinds of errors. Machine learning techniques, or more precisely supervised learning techniques, involve training, often the most important stage in the ML workflow. The machine learning model is trained using the training data. How is this training data prepared? By using a dataset for which the output of the algorithm is known. During the training stage, the algorithm analyzes the training data fed to it and produces patterns, which are captured within an inferred function. This inferred function, derived from analyzing the training dataset, is the model that will later be used to map new examples.

An ideal model generated from this training data should generalize well: it should learn from the training data and correctly predict or classify data in any new problem instance. In general, the more complex the model is, the better it classifies the training data. However, if the model is too complex, it will pick up random features, i.e. noise, in the training data; in this case the model is said to overfit. On the other hand, if the model is not complex enough and misses important dynamics present in the data, it underfits. Both overfitting and underfitting are errors in ML models and algorithms. It is also generally impossible to minimize both of these errors at the same time, and this leads to a condition called the bias-variance trade-off. Before getting into how to achieve the trade-off, let's first understand how bias and variance errors occur.

The Bias and Variance Error

Let's understand each error with the help of an example. Suppose you have three training datasets, say T1, T2, and T3, and you pass them through a supervised learning algorithm. The algorithm generates three different models, say M1, M2, and M3, one from each training dataset. Now say you have a new input A, and you apply each model to it. Two types of error can occur. If the output generated by each model on the input A is different (B1, B2, B3), the algorithm is said to have a high variance error. On the other hand, if the output from all three models is the same (B) but incorrect, the algorithm is said to have a high bias error. High variance means the algorithm produces a model that is too specific to the training data, which is a typical case of overfitting. High bias means the algorithm has not picked up the defining patterns in the dataset, which is a case of underfitting. Some examples of high-bias ML algorithms are Linear Regression, Linear Discriminant Analysis, and Logistic Regression. Examples of high-variance ML algorithms are Decision Trees, k-Nearest Neighbors, and Support Vector Machines.

How to achieve a Bias-Variance Trade-off?
For any supervised algorithm, having a high bias error usually means it has a low variance error, and vice versa. To be more specific, parametric or linear ML algorithms often have high bias but low variance, while non-parametric or non-linear algorithms tend to show the opposite: low bias but high variance. The goal of any ML model is to obtain both low variance and low bias, which is often difficult because of how machine learning algorithms are parametrized. So how can we achieve a trade-off between the two? Following are some ways to achieve the bias-variance trade-off:

By minimizing the total error: The optimum point for any model is the level of complexity at which the increase in bias is equivalent to the reduction in variance. Practically, there is no analytical method to find this optimal level. One should use an accurate measure of prediction error, explore different levels of model complexity, and then choose the complexity level that minimizes the overall error. Generally, resampling-based measures such as cross-validation should be preferred over theoretical measures such as Akaike's Information Criterion. Source: http://scott.fortmann-roe.com/docs/BiasVariance.html (The irreducible error is the noise that cannot be reduced by algorithms, but it can be reduced with better data cleaning.)

Using bagging and resampling techniques: These can be used to reduce the variance in model predictions. In bagging (Bootstrap Aggregating), several replicas of the original dataset are created using random selection with replacement. One modeling algorithm that makes use of bagging is Random Forests. In the Random Forest algorithm, the bias of the full model is equivalent to the bias of a single decision tree, which itself has high variance. By creating many of these trees, in effect a "forest", and then averaging them, the variance of the final model can be greatly reduced compared to that of a single tree.

Adjusting tuning parameters in algorithms: Both the k-Nearest Neighbors and Support Vector Machine (SVM) algorithms have low bias and high variance, but the trade-off in both cases can be changed. In k-Nearest Neighbors, the value of k can be increased, which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model. In SVM, the trade-off can be changed through the C parameter, which controls how strongly margin violations in the training data are penalized; lowering C allows more violations, which increases the bias but decreases the variance.

Using a proper machine learning workflow: This means ensuring proper training by:

Maintaining separate training and test sets - for example, splitting the dataset into training (50%), validation (25%), and test (25%) sets. The training set is used to build the model, the validation set to evaluate the performance of your model's hyperparameters, and the test set to check the accuracy of the final model.

Optimizing your model using systematic cross-validation - A cross-validation technique is a must to fine-tune model parameters, especially for unknown instances. In supervised machine learning, validation or cross-validation is used to measure the predictive accuracy of various models of varying complexity, in order to find the best one. For instance, one can use the k-fold cross-validation method: the dataset is divided into k folds, and for each fold the algorithm is trained on the remaining k-1 folds, using the held-out fold (also called the 'holdout fold') as the test set.
Repeat this process until each fold has acted as the test set once. The average of the k recorded errors is called the cross-validation error and can serve as the performance metric for the model.

Trying out appropriate algorithms - Before relying on any model, we first need to ensure that it suits our assumptions. Here the No Free Lunch theorem is worth keeping in mind: no single model works best for every problem. For instance, averaged over all possible problems, a random search performs as well as any heuristic optimization algorithm.

Tuning the hyperparameters that can give an impactful performance - Any machine learning model requires different hyperparameters, such as constraints, weights, or learning rates, to generalize to different data patterns. Tuning these hyperparameters is necessary so that the model can optimally solve machine learning problems. Grid search and randomized search are two methods commonly used for hyperparameter tuning.

So, we have listed some of the ways you can achieve a trade-off between the two. Bias and variance are related to each other: if you decrease one, the other increases, and vice versa. With a good trade-off, there is an optimal balance between bias and variance that gives us a model which is neither underfit nor overfit. And finally, the ultimate goal of any supervised machine learning algorithm is to isolate the signal from the dataset while eliminating the noise.
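To make the cross-validation approach concrete, here is a minimal sketch in Python, assuming scikit-learn is installed; the synthetic dataset and the specific values of k are purely illustrative. It scores a k-Nearest Neighbors classifier at several complexity levels with 5-fold cross-validation, which is one practical way to look for the complexity level that minimizes overall error.

```python
# A minimal sketch of choosing model complexity with k-fold cross-validation.
# Assumes scikit-learn; the synthetic dataset is only for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Small k -> low bias, high variance; large k -> high bias, low variance.
for k in (1, 3, 5, 11, 21, 51):
    model = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"k={k:>3}  mean accuracy={scores.mean():.3f}  std={scores.std():.3f}")
```

The value of k with the best cross-validated score is a reasonable proxy for the complexity level at which bias and variance balance out on a given dataset.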


5 cool ways Transfer Learning is being used today

Savia Lobo
15 Nov 2017
7 min read
Machine learning has gained a lot of traction over the years because of the predictive solutions it provides, including the development of intelligent and reliable models. However, training the models is a laborious task, because it takes time to curate the labeled data for the model and then get the model ready. The time spent on training and labeling can be reduced by using the novel approach of transfer learning - a smarter and more effective form of machine learning, where you can take the learnings from one scenario and apply them to a different but related problem.

How exactly does Transfer Learning work?

Transfer learning reduces the effort needed to build a model from scratch by taking the fundamental logic or base algorithms learned in one domain and applying them to another. For instance, in the real world, the balancing logic learned while riding a bicycle can be transferred to learning to drive other two-wheeled vehicles. Similarly, in machine learning, transfer learning can be used to transfer algorithmic logic from one ML model to another. Let's look at some of the possible use cases of transfer learning.

1. Real-world Simulations

Digital simulation is better than creating a physical prototype for real-world implementations. Training a robot in real-world surroundings is both time- and cost-intensive. To minimize this, robots can now be trained in simulation, and the knowledge acquired can then be transferred onto a real-world robot. This is done using progressive networks, which are ideal for simulation-to-real-world transfer of policies in robot control domains. These networks contain the features essential for learning numerous tasks in sequence while enabling transfer, and they are resistant to catastrophic forgetting - the tendency of Artificial Neural Networks (ANNs) to completely forget previously learned information upon learning new information.

Another application of simulation can be seen in training self-driving cars, which are trained using simulations through video games. Udacity has open sourced its self-driving car simulator, which allows training self-driving cars through GTA 5 and many other video games. However, not all features of a simulation are replicated successfully when they are brought into the real world, as interactions in the real world are more complex.

2. Gaming

The adoption of Artificial Intelligence has taken gaming to an altogether new level. DeepMind's neural network program AlphaGo is a testament to this, as it successfully defeated a professional Go player. AlphaGo is a master at Go but fails when tasked with playing other games, because its algorithm is tailored to play Go. So, the disadvantage of using ANNs in gaming is that they cannot master all games the way a human brain does. To do so, AlphaGo would have to completely forget Go and adapt itself to the algorithms and techniques of the new game. With transfer learning, the tactics learned in one game can be reapplied to play another game.

An example of how transfer learning is implemented in gaming can be seen in MadRTS, a commercial Real Time Strategy game developed to carry out military simulations. MadRTS uses CARL (CAse-based Reinforcement Learner), a multi-tiered architecture which combines Case-Based Reasoning (CBR) and Reinforcement Learning (RL). CBR provides an approach to tackle unseen but related problems based on past experiences within each level of the game.
RL algorithms, on the other hand, allow the model to make good approximations to a situation based on the agent's experience in its environment - also known as a Markov Decision Process. These CBR/RL transfer learning agents are evaluated on their ability to learn effectively on tasks given in MadRTS, and should be able to learn better across tasks by transferring experience.

3. Image Classification

Neural networks are experts at recognizing objects within an image because they are trained on huge datasets of labeled images, which is time-consuming. Transfer learning helps here by reducing the time needed to train the model: the model is pre-trained on ImageNet, which contains millions of images from different categories. Let's assume that a convolutional neural network - for instance, a VGG-16 ConvNet - has to be trained to recognize images in a dataset. First, it is pre-trained using ImageNet. Then it is trained layer-wise, starting by replacing the final layer with a softmax layer and training it until training saturates. Further dense layers are then trained progressively. By the end of training, the ConvNet model has learned to detect images from the dataset provided. In cases where the dataset is not similar to the pre-trained model's data, one can fine-tune the weights in the higher layers of the ConvNet by backpropagation. The dense layers contain the logic for detecting the image, so tuning the higher layers won't affect the base logic. Such convolutional neural networks can be trained with Keras, using TensorFlow as a backend (a short Keras sketch appears at the end of this article). An example of image classification can be seen in the field of medical imaging, where a convolutional model trained on ImageNet is used to solve a kidney detection problem in ultrasound images.

4. Zero Shot Translation

Zero shot translation is an extension of supervised learning, where the goal of the model is to learn to predict novel values that are not present in the training dataset. A prominent working example of zero shot translation can be seen in Google's Neural Machine Translation model (GNMT), which allows for effective cross-lingual translations. Prior to zero shot implementations, two discrete languages had to be translated using a pivot language. For instance, to translate Korean to Japanese, Korean first had to be translated into English and then English into Japanese; here, English is the pivot language that acts as a medium for translating Korean to Japanese. This resulted in translations full of distortions created by the first language pair. Zero shot translation removes the need for a pivot language: it uses the available training data to learn translational knowledge and applies it to translate a new language pair. Another instance of the zero shot approach can be seen in Image2Emoji, which combines visuals and text to predict unseen emoji icons.

5. Sentiment Classification

Businesses can know their customers better by implementing sentiment analysis, which helps them understand the emotions and polarity (negative or positive) underlying feedback and product reviews. Sentiment analysis for a new text corpus is difficult to build, because training models to detect different emotions is difficult. A solution to this is transfer learning.
This involves training a model on one domain, Twitter feeds for instance, and fine-tuning it for another domain you wish to perform sentiment analysis on, say movie reviews. Here, deep learning models are trained on Twitter feeds by carrying out sentiment analysis of the text corpus and detecting the polarity of each statement. Once the model has learned to understand emotions through the polarity of the Twitter feeds, its underlying language model and learned representations are transferred to the model assigned the task of analyzing sentiments in movie reviews. Here, an RNN model trained with logistic regression techniques carries out sentiment analysis on the Twitter feeds. The word embeddings and recurrent weights learned from the source domain (Twitter feeds) are re-used in the target domain (movie reviews) to classify sentiments in the latter.

Conclusion

Transfer learning has brought in a new wave of learning in machines by reusing algorithms and applied logic, thus speeding up the learning process. This directly results in a reduction in the capital investment and the time needed to train a model, which is why many organizations are looking to replicate such learning in their own machine learning models. Transfer learning has already been carried out successfully in fields such as image processing, simulations, and gaming. How transfer learning affects the learning curve of machines in other sectors in the future is worth watching out for.
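As a companion to the image classification workflow described in section 3 above, here is a minimal sketch of transfer learning with a pre-trained VGG-16 in Keras (TensorFlow backend). The number of target classes and the training data are placeholders, and freezing the convolutional base before training a new softmax head is just one common recipe under these assumptions, not the only one.

```python
# A minimal transfer learning sketch with VGG-16 pre-trained on ImageNet.
# Assumes TensorFlow/Keras; num_classes and the training data are placeholders.
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

num_classes = 10  # hypothetical number of target categories

# Load the convolutional base pre-trained on ImageNet, without its classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False  # freeze the base so only the new head is trained first

# Replace the final layer with a new softmax head for the target dataset.
x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
outputs = Dense(num_classes, activation="softmax")(x)
model = Model(inputs=base.input, outputs=outputs)

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)  # train the new head on your labeled images
# Optionally unfreeze the top convolutional blocks and fine-tune with a low learning rate.
```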


What is Automated Machine Learning (AutoML)?

Wilson D'souza
17 Oct 2017
6 min read
Are you a proud machine learning engineer who hates that the job tests your limits as a human being? Do you dread the long hours of data experimentation and data modeling that leave you high and dry? Automated Machine Learning, or AutoML, can put that smile back on your face. A self-replicating AI algorithm, AutoML is the latest tool being applied in the real world today, and AI market leaders such as Google have invested significantly in researching this field further. AutoML has seen a steep rise in research and new tools over the last couple of years, but its recent mention during Google I/O 2017 has piqued the interest of the entire developer community. What is AutoML all about, and what makes it so interesting?

Evolution of automated machine learning

Before we try to understand AutoML, let's look at what triggered the need for automated machine learning. Until now, building machine learning models that work in the real world has been a domain ruled by researchers, scientists, and machine learning experts. The process of manually designing a machine learning model involves several complex and time-consuming steps, such as:

Pre-processing data
Selecting an appropriate ML architecture
Optimizing hyperparameters
Constructing models
Evaluating the suitability of models

Add to this the several layers of neural networks required for an efficient ML architecture - an n-layer neural network could result in n^n potential networks. This level of complexity could be overwhelming for the millions of developers who are keen on embracing machine learning. AutoML tries to solve this problem of complexity and makes machine learning accessible to a large group of developers by automating routine but complex tasks, such as the design of neural networks. Since this cuts development time significantly and takes care of several complex tasks involved in building machine learning models, AutoML is expected to play a crucial role in bringing machine learning to the mainstream.

Approaches to automating model generation

With a growing body of research, AutoML aims to automate the following tasks in the field of machine learning:

Model selection
Parameter tuning
Meta-learning
Ensemble construction

It does this by using a wide range of algorithms and approaches, such as:

Bayesian optimization: One of the fundamental approaches for automating model generation is to use Bayesian methods for hyperparameter tuning. By modeling the uncertainty of parameter performance, different variations of the model can be explored to find an optimal solution.

Meta-learning and ensemble construction: To further increase AutoML efficiency, meta-learning techniques are used to find and pick optimal hyperparameter settings. These techniques can be coupled with automatic ensemble construction to create an effective ensemble model from a collection of models that undergo optimization. Using these techniques, a high level of accuracy can be achieved throughout the process of automated model generation.

Genetic programming: Certain tools like TPOT also make use of a variation of genetic programming (tree-based pipeline optimization) to automatically design and optimize ML models that offer highly accurate results for a given set of data. This approach uses operators at various stages of the data pipeline, which are assembled together in the form of a tree-based pipeline. These are then further optimized, and newer pipelines are auto-generated using genetic programming.
If that weren't enough, Google in its recent posts disclosed that it is using a reinforcement learning approach to give a further push to the development of efficient AutoML techniques.

What are some tools in this area?

Although it's still early days, we can already see some frameworks emerging to automate the generation of machine learning models.

Auto-sklearn: Auto-sklearn, the tool which won the ChaLearn AutoML Challenge, provides a wrapper around the popular Python library scikit-learn to automate machine learning. It is a great addition to the ever-growing ecosystem of Python data science tools. Built on top of Bayesian optimization, it takes away the hassle of algorithm selection, parameter tuning, and ensemble construction while building machine learning pipelines. With auto-sklearn, developers can iterate and refine their machine learning models rapidly, thereby saving a significant amount of development time. The tool is still in its early stages of development, so expect a few hiccups while using it.

DataRobot: DataRobot offers a machine learning automation platform to data scientists of all levels, aimed at significantly reducing the time to build and deploy predictive models. Since it is a cloud platform, it offers great power and speed throughout the automated model generation process. In addition to automating the development of predictive models, it offers other useful features such as a web-based interface, compatibility with several leading tools such as Hadoop and Spark, scalability, and rapid deployment. It is one of the few machine learning automation platforms that are ready for industry use.

TPOT: TPOT is yet another Python tool meant for automated machine learning. It uses a genetic programming approach to iterate over and optimize machine learning models. As with auto-sklearn, TPOT is built on top of scikit-learn. It has a growing interest level on GitHub, with 2,400 stars and a 100% rise in the past year alone. Its goals are quite similar to those of auto-sklearn: feature construction, feature selection, model selection, and parameter optimization. With these goals in mind, TPOT aims at building efficient machine learning systems in less time and with better accuracy (a short usage sketch follows at the end of this article).

Will automated machine learning replace developers?

AutoML as a concept is still in its infancy. But as market leaders like Google, Facebook, and others research more in this field, AutoML will keep evolving at a brisk pace. Assuming that AutoML will replace humans in the field of data science, however, is a far-fetched thought and nowhere near reality. Here is why. AutoML as a technique is meant to make the neural network design process efficient, not to replace humans and researchers in the field of building neural networks. The primary goal of AutoML is to help experienced data scientists be more efficient at their work, i.e., enhance productivity by a huge margin, and to reduce the steep learning curve for the many developers who are keen on designing ML models, i.e., make ML more accessible. With the advancements in this field, it's an exciting time for developers to embrace machine learning and start building intelligent applications. We see automated machine learning as a game changer with the power to truly democratize the building of AI apps. With automated machine learning, you don't have to be a data scientist to develop an elegant AI app!
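For illustration, here is a minimal sketch of how a TPOT run typically looks, assuming the tpot and scikit-learn packages are installed; the digits dataset and the generation/population settings are placeholders rather than recommendations.

```python
# A minimal sketch of automated pipeline search with TPOT (genetic programming).
# Assumes the tpot and scikit-learn packages; the digits dataset is a stand-in for your data.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# TPOT evolves preprocessing + model + hyperparameter pipelines over generations.
automl = TPOTClassifier(generations=5, population_size=20, verbosity=2, random_state=42)
automl.fit(X_train, y_train)
print(automl.score(X_test, y_test))

# Export the best pipeline found as plain scikit-learn code.
automl.export("best_pipeline.py")
```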


Packt Explains... Deep Learning in 90 seconds

Packt Publishing
01 Mar 2016
1 min read
If you've been looking into the world of Machine Learning lately, you might have heard about a mysterious thing called "Deep Learning". But just what is Deep Learning, and what does it mean for the world of Machine Learning as a whole? Take less than two minutes out of your day to find out, and get a sense of the awesome potential of Deep Learning, with this video.


Machine learning APIs for Google Cloud Platform

Amey Varangaonkar
28 Jun 2018
7 min read
Google Cloud Platform (GCP) is considered to be one of the big three cloud platforms, alongside Microsoft Azure and AWS. GCP is a widely used cloud solution with AI capabilities for designing and developing smart models that turn your data into insights at an affordable cost. The following excerpt is taken from the book 'Cloud Analytics with Google Cloud Platform', authored by Sanket Thodge. GCP offers many machine learning APIs, of which we take a look at the three most popular:

Cloud Speech API

A powerful API from GCP, this enables the user to convert speech to text using a neural network model. The API can recognize over 100 languages from around the world and can also filter unwanted noise and content from the audio under various types of environments. It supports context-aware recognition and works on any device, any platform, anywhere, including IoT. It has features like Automatic Speech Recognition (ASR), global vocabulary, streaming recognition, word hints, real-time audio support, noise robustness, and inappropriate content filtering, and it supports integration with other GCP APIs. In other words, the Cloud Speech API's architecture enables speech-to-text conversion through ML.

The components used by the Speech API are:

REST API or Google Remote Procedure Call (gRPC) API
Google Cloud Client Library
JSON API
Python
Cloud Datalab
Cloud Data Storage
Cloud Endpoints

The applications of the model include:

Voice user interfaces
Domotic appliance control
Preparation of structured documents
Aircraft / direct voice outputs
Speech-to-text processing
Telecommunication

It is free of charge for up to 60 minutes per month, billed in 15-second increments; beyond that, usage is charged at $0.006 per 15-second increment. Now that we have learned about the concepts and applications of the model, let's look at some use cases where it can be implemented:

Solving crimes with voice recognition: AGNITIO, a voice biometrics specialist, partnered with Morpho (Safran) to bring Voice ID technology into its multimodal suite of criminal identification products.

Buying products and services with the sound of your voice: Another popular and mainstream application of biometrics in general is mobile payments. Voice recognition has also made its way into this highly competitive arena.

A hands-free AI assistant that knows who you are: Almost any mobile phone nowadays has voice recognition software in the form of AI machine learning algorithms.

Cloud Translation API

Natural language processing (NLP) is a part of artificial intelligence that focuses on Machine Translation (MT), which has been a main focus of the NLP community for many years. MT deals with translating text from the source language into text in the target language. The Cloud Translation API provides a simple interface to translate an input string from one language into a target language; it is highly responsive, scalable, and dynamic in nature. The API enables translation among 100+ languages and also supports automatic language detection with good accuracy. It can read a web page's contents and translate them into another language; the input need not be text extracted from a document. The Translation API supports features such as programmatic access, text translation, language detection, continuous updates, an adjustable quota, and affordable pricing. At an architectural level, the Cloud Translation API is an adaptive machine translation model.
The components used by this model are:

REST API
Cloud Datalab
Cloud Data Storage
Python and Ruby client libraries
Cloud Endpoints

The most important application of the model is the conversion of text in a regional language into a foreign language. The cost of text translation and language detection is $20 per 1 million characters.

Use cases

Now that we have learned about the concepts and applications of the API, let's look at two use cases where it has been successfully implemented:

Rule-based Machine Translation
Local Tissue Response to Injury and Trauma

We will discuss each of these use cases in the following sections.

Rule-based Machine Translation

The steps to implement rule-based Machine Translation successfully are as follows:

Input text
Parsing
Tokenization
Compare the rules to extract the meaning of a prepositional phrase
Map words of the input language to words of the target language
Frame the sentence in the target language

Local tissue response to injury and trauma

We can learn about the Machine Translation process from the responses of local tissue to injuries and trauma; the human body follows a process similar to Machine Translation when dealing with injuries. We can roughly describe the process as follows:

Hemorrhaging from lesioned vessels and blood clotting
Blood-borne physiological components, leaking from the usually closed sanguineous compartment, are recognized as foreign material by the surrounding tissue since they are not tissue-specific
Inflammatory response mediated by macrophages (and more rarely by foreign-body giant cells)
Resorption of the blood clot
Ingrowth of blood vessels and fibroblasts, and the formation of granulation tissue
Deposition of an unspecific but biocompatible type of repair (scar) tissue by fibroblasts

Cloud Vision API

The Cloud Vision API is a powerful image analytics tool. It enables users to understand the content of an image by finding various attributes or categories of the image, such as labels, web, text, document, properties, and safe search, returned as JSON. Within the labels field there are many sub-categories, such as text, line, font, area, graphics, screenshots, and points. The web field covers details such as how much of the image area is graphics, the percentage of text, how much of the area is empty or covered by text, and whether the image is partially or fully matched on the web. The document field describes blocks of the image in detail, the properties field visualizes the colors used in the image, and safe search flags any unwanted or inappropriate content in the image. The main features of this API are label detection, explicit content detection, logo and landmark detection, face detection, and web detection; to extract text, the API uses Optical Character Recognition (OCR) with support for many languages. It does not support face recognition. We can summarize the functionality of the API as extracting quantitative information from images: it takes an image as input and produces numbers and text as output.

The components used in the API are:

Client Library
REST API
RPC API
OCR Language Support
Cloud Storage
Cloud Endpoints

Applications of the API include:

Industrial robotics
Cartography
Geology
Forensics and military
Medical and healthcare

Cost: free of charge for the first 1,000 units per month; after that, pay as you go.
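As a rough illustration of how the Vision API is called in practice, here is a minimal Python sketch using the google-cloud-vision client library for label detection. It assumes the library is installed and application credentials are configured; the image path is a placeholder, and exact class names can vary slightly between client library versions.

```python
# A minimal sketch of label detection with the Cloud Vision API (Python client).
# Assumes the google-cloud-vision package and configured application credentials;
# the image path is a placeholder. Class names may differ across library versions.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("photo.jpg", "rb") as f:          # placeholder image file
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)

# Each label comes back with a description and a confidence score.
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```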
Use cases

The API can be successfully applied to:

Image detection using an Android or iOS mobile device
Retinal image analysis (ophthalmology)

We will discuss each of these use cases in the following topics.

Image detection using an Android or iOS mobile device

The Cloud Vision API can be used to detect images from your smartphone. The steps to do this are simple:

Input the image
Run the Cloud Vision API
Execute the methods for detecting face, label, text, web, and document properties
Generate the response in the form of a phrase or string
Populate the image details as a text view

Retinal image analysis - ophthalmology

Similarly, the API can be used to analyze retinal images. The steps to implement this are as follows:

Input the images of an eye
Estimate the retinal biomarkers
Process the image to remove the affected portion without losing necessary information
Identify the locations of specific structures
Identify the boundaries of the object
Find similar regions in two or more images
Quantify the retinal damage in the image

You can learn a lot more about the machine learning capabilities of GCP in their official documentation. If you found the above excerpt useful, make sure you check out our book 'Cloud Analytics with Google Cloud Platform' for more information on why GCP is a top cloud solution for machine learning and AI.

Read more

Google announces Cloud TPUs on the Cloud Machine Learning Engine (ML Engine)
How machine learning as a service is transforming cloud
Google announces the largest overhaul of their Cloud Speech-to-Text


Top 6 Java Machine Learning/Deep Learning frameworks you can’t miss

Kartikey Pandey
08 Dec 2017
4 min read
The data science tech market is buzzing with new and interesting machine learning libraries and tools almost every day. In an ever-growing market, it becomes difficult to choose the right tool or set of tools. More importantly, Artificial Intelligence and Deep Learning based projects require a different approach than traditional programming, which makes it tricky to zero in on one library or framework. The choice of a framework is largely based on the type of problem one expects to solve, but there are other considerations too. Speed is one factor that will almost always play an important role in decision making. Other considerations include how open-ended the framework is, its architecture, functions, complexity of use, support for algorithms, and so on. Here we present six Java libraries for your next Deep Learning or Artificial Intelligence project that you shouldn't miss if you are a Java loyalist, or simply a web developer who wants to enter the world of deep learning.

DeepLearning4j (DL4J)

One of the first, commercial-grade, and most popular deep learning frameworks developed in Java. It also supports other JVM languages (Java, Clojure, Scala). What's interesting about DL4J is that it comes with built-in GPU support for the training process. It also supports Hadoop YARN for distributed application management. It is popular for solving problems related to image recognition, fraud detection, and NLP.

MALLET

MALLET (Machine Learning for Language Toolkit) is an open source Java machine learning toolkit. It supports NLP, clustering, modelling, and classification. The most important capability of MALLET is its support for a wide variety of algorithms, such as Naive Bayes and Decision Trees. Another useful feature is its topic modelling toolkit; topic models are useful when analyzing large collections of unlabelled text.

Massive Online Analysis (MOA)

MOA is an open source data streaming and mining framework for real-time analytics. It has a strong and growing community, and is similar and related to Weka. It also has the ability to deal with massive data streams.

Encog

This framework supports a wide array of neural networks and algorithms, such as artificial neural networks, Bayesian networks, and genetic programming.

Neuroph

Neuroph, as the name suggests, offers great simplicity when working with neural networks. The main USP of Neuroph is its incredibly useful GUI (Graphical User Interface) tool that helps in creating and training neural networks. Neuroph is a good choice of framework when you have a quick project on hand and don't want to spend hours learning the theory; it helps you quickly get up and running with neural networks for your project.

Java Machine Learning Library

The Java Machine Learning Library offers a great set of reference implementations of algorithms that you can't miss for your next machine learning project. Some of the key highlights are support vector machines and clustering algorithms.

These are a few key frameworks and tools you might want to consider when working on your next research project. The Java ML library ecosystem is vast, with many tools and libraries to support it, and we have just touched the tip of that iceberg in this article. One particular tool that deserves an honourable mention is the Environment for Developing KDD-Applications Supported by Index-Structures (ELKI). It is designed particularly with researchers and research students in mind.
The main focus of ELKI is its broad coverage of data mining algorithms, which makes it a natural fit for research work. What's really important while choosing any of the above tools, or tools outside this list, is a good understanding of your requirements and the problems you intend to solve. To reiterate, some of the key considerations to bear in mind before zeroing in on a tool are: support for algorithms, implementation of neural networks, dataset size (small, medium, large), and speed.

12 ubiquitous artificial intelligence powered apps that are changing lives

Bhagyashree R
30 Aug 2018
11 min read
Artificial Intelligence is making it easier for people to do things every day. You can schedule your day, search for photos of loved ones, type emails on the go, or get things done with a virtual assistant. AI also provides innovative ways of tackling existing problems, from healthcare to advancing scientific discovery. According to Gartner's Top 10 Strategic Technology Trends for 2018, the next few years will see every app, application, and service incorporating AI at some level. With major companies like Google, Amazon, and IBM investing in AI and incorporating it into their products, this statement is becoming a fact rather than a prediction. Apple's iPhone X comes with a facial recognition system; Samsung has Bixby, Amazon has Alexa, Google has Google Assistant, and there is the recently launched Android Pie, which learns your preferences based on your usage patterns and gets better over time. It even provides you with a breakdown of the time you spend on your phone. AI comes with endless possibilities; things that we used to dream of are now becoming a part of our day-to-day lives. So, I have listed here, in no particular order, some of those innovative applications:

Microsoft's Seeing AI - Eye for the visually impaired

Source: Microsoft

Seeing AI is a perfect example of how technology is improving our lives. It is an intelligent camera app that uses computer vision to audibly help blind and visually impaired people know about their surroundings. It comes with functionalities like reading out short text and documents for you, describing a person, and identifying currencies, colour, handwriting, light, and even images in other apps, using the device's camera. A data scientist named Anirudh Koul started this project (earlier called Deep Vision) to help his grandfather, who was gradually losing his vision. Two breakthroughs by Microsoft researchers helped him further his idea: vision-to-language and image classification. To make the app this advanced and real-time, its servers communicate with Microsoft Cognitive Services. The app brings together four technologies to provide users with an array of functionalities: OCR, a barcode scanner, facial recognition, and scene recognition. Check out this YouTube tutorial to understand how it works.

Download App Store

Ada - Healthcare in your hand

Source: Digital Health

Ada, with a very simple and conversational UI, helps you understand what could be wrong if you or someone you care about is not feeling well. Just like any doctor's appointment, it starts with your basic details, then does an assessment in which it asks several personalized questions related to the symptoms, and then gives a report. The report consists of a summary, possible causes, and less likely causes. It also allows you to share the report as a PDF. After training over several years on real-world cases, Ada has become a handy health advisor. Its platform is powered by a sophisticated artificial intelligence engine combined with a large medical knowledge base covering many thousands of conditions, symptoms, and findings. In every medical assessment, Ada takes all of a patient's information into account, including past medical history, symptoms, risk factors, and more. Using machine learning and multiple closed feedback loops, Ada becomes more intelligent.
Download App Store Google Play Store

Plume Air Report - An air pollution monitor

Source: Plume Labs Blog

Industrialization and urbanization come with their side effects, the main one being air pollution. It is hard to escape pollution entirely, but now at least you can be aware of the air pollution levels in your area. Plume Air Report forecasts how air quality will evolve hour by hour over the next 24 hours, similar to a weather forecast. You can also easily compare air quality between cities. It gives you insight into all the main pollutants (PM2.5, PM10, O3, NO2), with absolute concentration levels and your local air quality scale. It uses machine learning and atmospheric science to deliver real-time and hourly forecast air quality data. First, the latest pollution levels are collected from over 12,000 monitoring stations and 80 public agencies around the world and then filtered for errors. Local atmospheric data (wind, temperature, atmosphere, etc.) is sourced to track its influence on pollution levels in your city. A team of data scientists analyzes local specifics such as geographical features and human activities. Finally, AI algorithms and atmospheric models turn this giant amount of data into hourly forecasts.

Download App Store Google Play Store

Aura - Mindfulness meets AI

Source: Popular Science

In this fast-paced life, slow down a little and give yourself a time out with Aura. Aura is a new kind of mindfulness app that learns about you and simplifies your learning through guided meditations. It helps reduce stress and increase positivity through three-minute meditations, personalized by artificial intelligence. Aura is an intelligent app that leverages machine learning to give you a unique experience. After every exercise, you can rate your experience and Aura will learn how to provide more tailored meditations according to your needs. You can even track your mood and learn your mood patterns.

Download App Store Google Play Store

Replika - An emotive chatbot as a friend for life

Source: Medium

Want to be friends with someone who is always there to listen to you, talk to you, and never judge you? Then Replika is for you! It helps you make a real connection with an unreal friend. The idea of building Replika came from a very tragic event: the founder of the software company Luka, Eugenia Kuyda, lost her best friend in an accident in November 2015. She used to go through their messenger texts to bring back memories, and this is how she got the idea to develop a chatbot that learns from the sample texts sent by her best friend. In her own words, "Most of the companies try to build an app that talks, but we tried to build an app that could listen well." The chatbot uses a neural network to facilitate more natural one-on-one conversation with its user and, over time, learns how to speak like them. The source code is freely available for developers under the name CakeChat. It comes with a pre-trained model that you can use as-is to run a chatbot that maintains a conversation in a certain emotional state. You can also build a variety of other conversational agents by using your own dataset, for example a persona-based model, an emotional chatting machine, or a topic-centric model. To know more about the background and evolution of Replika, check out this YouTube video.
Download App Store Google Play Store

Google Assistant - Your personal Google

Source: Google Assistant

When talking of AI-powered apps, voice assistants probably come to mind first. Google Assistant makes your life easier and helps you organize your day better. You can manage your little tasks, plan your day, enjoy entertainment, and get answers. It can also sync with your other devices, including Google Home, smart TVs, laptops, and more. To give users smart assistance, Google Assistant relies on artificial intelligence technologies such as natural language processing, natural language understanding, and machine learning to understand what the user is saying and to make suggestions or act on that language input.

Download App Store Google Play Store

Hound - Say it, Get it

Source: Android Apps

In an array of virtual assistants to choose from, Hound understands your voice commands better. You do not need to give "search query"-like commands and can have a more natural conversation. Hound can be used for a variety of tasks, such as searching, discovering, and playing music; setting alarms, timers, and reminders; calling, texting, and navigating hands-free; and getting the weather forecast. Hound's speed and accuracy come from its powerful Houndify platform, which combines Speech Recognition and Natural Language Understanding into a single step, called Speech-to-Meaning.

Download App Store Google Play Store

Picai - An app that picks filters for your pics, keeping you looking your best always

Source: Google Play Store

Picai, with the help of artificial intelligence, recommends picture-perfect filters by analyzing the scene. It automatically analyzes the scene and, with the help of object recognition, detects the type of object, for example a plant or a person. It then uses a proprietary deep learning model to recommend two optimum filters out of 100+. What makes this app stand out is the split-screen filter selection, which makes choosing a filter easier. When using this app, be warned of the picture quality and app size (76 MB), but it is definitely worth trying!

Download Google Play Store

Microsoft Pix - The pro photographer

Source: MSPoweruser

Named one of the 50 Best Apps of the Year by Time Magazine, Microsoft Pix helps you take better photos without the extra effort! It solves the problem of "not living in the moment". It comes with some amazing features like hyperlapse, live images, Microsoft Pix Comix, artistic styles to transform your photos, and smart settings that automatically check the scene and lighting between each shutter tap and update the settings between each shot. Microsoft Pix uses artificial intelligence to improve the image, for example by cropping edges, enhancing color and tone, and sharpening focus. It includes enhanced deep learning capabilities around image understanding. It captures a burst of 10 frames with each shutter click and uses AI to select the three best shots. Before the remaining photos are deleted, it uses data from the entire burst to remove noise. These best, enhanced images are ready in about a second. The app also detects whether your eyes are open or not using facial recognition technology.

Download App Store

ELSA - Your machine learning English teacher

Source: TechCrunch

ELSA (English Language Speech Assistant) helps you learn English and improve your pronunciation every day. It provides a curriculum tailored just for you, regular feedback, progress tracking, and common phrases used in daily life.
You can practice in a relaxed environment and improve your speaking skills to prepare for the TOEFL, IELTS, and TOEIC. ELSA coaches you in improving your English pronunciation using speech recognition, deep learning, and artificial intelligence.

Download App Store Google Play Store

Socratic - Homework in a snap

Source: Google Play Store

Socratic is your new helper, apart from your parents, in completing those complex math problems. You just need to take a photo of your homework and you can get explanations, videos, and step-by-step help, instantly. These resources are jargon-free, helping you understand the concepts better. It supports all subjects, including Math (Algebra, Calculus, Statistics, Graphing, etc.), Science, Chemistry, History, English, Economics, and more. Socratic uses artificial intelligence to figure out the concepts you need to learn in order to answer a question. For this, it combines cutting-edge computer vision technologies, which read questions from images, with machine learning classifiers. These classifiers are built using millions of sample homework questions, to accurately predict which concepts will help you solve your question.

Download App Store Google Play Store

Recent News - Stay informed

Source: Recent News

Recent News is an app that provides you with customized news. Some of the features it offers to give you your daily dose of news include a one-minute news summary with very quick load times; hot news, local news, and personalized recommendations; instant sharing of news on Facebook, Twitter, and other social networks; and many more. It uses artificial intelligence to learn about your interests, suggest relevant articles, and propose topics you might like to follow. So, the more you use it, the better it becomes! The app is surely innovative and saves time, but I do wish the developers had applied some innovation to the app's name as well :P

Download App Store Google Play Store

And that's the end of my list. People say, "Smartphones and apps are becoming smarter, and we are becoming dumber." But I would like to say that these apps, with the right usage, empower us to become smarter. Agree?

7 Popular Applications of Artificial Intelligence in Healthcare
5 examples of Artificial Intelligence in Web apps
What Should We Watch Tonight? Ask a Robot, says Matt Jones from OVO Mobile [Interview]


Why learn machine learning as a non-techie?

Natasha Mathur
11 Sep 2018
9 min read
“..what we want is a machine that can learn from experience..” ~ Alan Turing, 1947

Thanks to artificial intelligence, Turing's vision is coming true. Machines are learning, from others' experience (using training datasets) and from their own as well. Machines can now play chess, Go, and other games; they can help predict cancer, manage your day, summarize today's news for you, edit your essays, identify your face, and even mimic dance moves and facial expressions. Come to think of it, every job role and career demands that you learn from experience, improve over time, and explore new ways to do things. Machines are very effective at the former two, but humans still have an edge when it comes to innovative thinking. Imagine what you could achieve if you put together your mind with that of an efficient learning algorithm!

You might think that artificial intelligence and machine learning are a dense and impenetrable field limited to research labs and textbooks. Does that mean only software engineers and researchers can dream of making it into this fascinating field? Not quite. We'll unpick machine learning in the following sections and present our case for why it makes sense for everyone to understand this field better. Machine learning is, potentially, a first-class ticket to an exciting career, whether you are starting off fresh from college or considering a career switch.

Beyond the artificial intelligence and machine learning hype

Artificial intelligence is simply an area of computing that solves complex real-world problems. Yes, research still happens in universities, and yes, data scientists are still exploring the limits of artificial intelligence in forward-thinking businesses, but it's much more than that. AI is so pervasive - and mysterious - that its applications hide in plain sight. Look around you carefully. From Netflix recommending personalized content to its 130 million viewers, to YouTube's video search and automatic captions, to Amazon's shopping recommendations, to Instagram hashtags, Snapchat filters, spam filters on your Gmail, and virtual assistants like Siri on our smartphones, artificial intelligence and machine learning techniques are in action everywhere. This means that, as a user, you are already impacted by algorithms every day at some level. The question then is: should you be the person whose career is limited by algorithms, or the one whose career is propelled by them?

Why get into artificial intelligence development as a non-programmer?

Artificial intelligence is a perfect blend of knowledge, high salaries, and some really great opportunities. Your non-programming field does not have to deter your growth in the AI field. In fact, your background can give you an edge over traditional software developers and data scientists in terms of domain awareness and a better understanding of what the system should do, what it should look for, and how it should make users feel. Below are some reasons why you should make the jump into AI.

Machine learning can help you be better at your current job

How, you may ask? Take a news reporter's or editor's job, for example. They must possess a blend of research and analysis capabilities, a creative set of skills, and the speed to come up with timely, quality articles on topics of interest to their readers. A data journalist or a writer with machine learning experience could quickly find great topics to write on with the help of machine learning based web scraping apps.
Also, they could let the data lead them to unique stories that are emerging before traditional news reporters find their way to them. They could also get a quick summary of multiple perspectives on a given topic using custom-built news feed algorithms, and then find further research resources by tweaking their search parameters, even adding quality filters on top to allow only high-quality citations. This kind of writer has cut down on the time spent finding and understanding topics - which means more time to actually write compelling pieces and to connect with real sources for further insight. Algorithms can also find and correct language issues in writing now, meaning editors can spend more time improving content quality from a scope perspective. You can quickly start to see how artificial intelligence can complement the work you do and help you grow in your career. Yes, all this sounds lovely in theory, but is it really happening in practice?

There are others like you who are successfully exploring machine learning

Don't believe me? Mason Fish, a software engineer at Docker, Inc., was earlier a musician. He did his bachelor's and master's at two different music conservatories and, after graduating, worked for five years as a professional musician. But today he helps build and maintain services for Docker, a tool used by software engineers all over the world! This is just one case of a non-programmer diving into the computer science world. When musicians can learn to code and get core developer jobs in cutting-edge tech companies, it is not far-fetched to say they can also learn to build machine learning models. Below are some examples of non-programmers of varied experience levels who are exploring the machine learning world.

Per Harald Borgen, an economics graduate, was able to boost sales at his workplace Xeneta using machine learning algorithms, an accomplishment that helped accelerate his career. You can read his blog to see how he transformed from a machine learning newbie into a seasoned practitioner. Another example is 14-year-old Tanmay Bakshi, who started a YouTube channel at just 7 years of age, where he teaches coding, algorithms, AI, and machine learning concepts. Similarly, Sean Le Van created an AI chatbot using ML algorithms when he was 14 years old.

Rosebud Anwuri is another great example, as she switched from chemical engineering to data science. "My first exposure to Data Science was from a book that had nothing to do with Data Science," writes Anwuri on her blog. She created her first data science learning path from an answer on Quora last year. Fast forward to this year: she has been invited to speak at Stanford's Women in Data Science Conference in Nigeria and has facilitated a workshop at Women in Machine Learning and Data Science, among others. She also writes on machine learning and data science on her blog.

Like Anwuri, Sce Pike dreamed of being an artist or singer in college and majored in fine arts and anthropology. Pike went from art to web design to "human factors design," which involves human-machine interactions, for the telecommunications giant Qualcomm. In addition, Pike started her own company, IOTAS, which offers smart-home services to renters and homeowners. "I have had to approach my work with logic, research, and great design. Looking back, I'm amazed where I am now," says Sce Pike.
Read also: Data science for non-techies: How I got started (Part 1)

Adapt or perish in the oncoming job automation wave of the fourth industrial revolution

OK, so maybe you're happy with how you are growing in your career anyway. Be warned though: your job may not look the same even in the next few years. Automation is expected to replace up to 30% of jobs in the next 10 years, so upskilling in machine learning is a wise choice. Last month, the Bank of England's Chief Economist warned that 15 million jobs in Britain could be at stake because of artificial intelligence. Machine learning as a skill could help you stay relevant in the future and prepare for what's being called "the third machine age".

You can develop machine learning apps with no to minimal coding experience

Thanks to great advancements by big tech companies and open source projects, machine learning today is accessible to people with varying degrees of programming experience - from new developers to those who have never written a line of code in their life. So, whether you're a curious web/UX designer, a news reporter, an artist, a school student, a filmmaker or an NGO worker, you will find good use for machine learning in your field. There are machine learning tools for users with varying levels of experience. In fact, there are certain machine learning applications that you can build even today: image and text classification with neural networks, facial recognition, gaming bots, music generation, object detection, and so on.

Machine learning skills are highly rewarded

Machine learning is a nascent field where demand far outweighs supply. According to research done by Indeed.com, the number one job requirement in AI is that of a machine learning engineer, with data scientist jobs taking the second spot. In fact, AI researchers can earn more than a million dollars per year, and the AI geniuses at Elon Musk's OpenAI are living proof of this. OpenAI paid its top AI researcher, Ilya Sutskever, more than $1.9 million back in 2016. Another leading OpenAI researcher, Ian Goodfellow, was paid more than $800,000.

Machine learning is not hard to learn. It might seem intimidating at first, but once you get the basics right, the rest of the ML journey becomes easier. If you're convinced that ML is for you but are confused about how to get started, don't worry, we've got you covered. To help you get started, here is a non-programmer's guide to learning machine learning. So, yes, it doesn't matter if you're a non-programmer, a musician, a librarian, or a student; the future is AI-driven, so don't be afraid to make that dive into machine learning. As Robert Frost said, "Two roads diverged in a wood, and I took the one less traveled by, And that has made all the difference".

8 Machine learning best practices [Tutorial]
Google introduces Machine Learning courses for AI beginners
Top languages for Artificial Intelligence development

GROVER: A GAN that fights neural fake news, as long as it creates said news

Vincy Davis
11 Jun 2019
7 min read
Last month, a team of researchers from the University of Washington and the Allen Institute for Artificial Intelligence published a paper titled 'Defending Against Neural Fake News'. The goal of this paper is to reliably detect "neural fake news" so that its harm can be minimized. To this end, the researchers have built a model named 'GROVER'. It works as a generator of fake news that can also spot its own generated fake news articles, as well as those generated by other AI models. GROVER (Generating aRticles by Only Viewing mEtadata Records) models can generate an efficient yet controllable news article, with not only the body but also the title, news source, publication date, and author list. The researchers affirm that the 'best models for generating neural disinformation are also the best models at detecting it'.

The framework for GROVER represents fake news generation and detection as an adversarial game:

- Adversary: This system generates fake stories that match specified attributes: generally, being viral or persuasive. The stories must read as realistic to both human users and the verifier.
- Verifier: This system classifies news stories as real or fake. A verifier has access to unlimited real news stories and a few fake news stories from a specific adversary.

The dual objective of these two systems suggests an escalating 'arms race' between attackers and defenders: as the verification systems get better, the adversaries are expected to follow.

Modeling Conditional Generation of Neural Fake News using GROVER

GROVER adopts a language modeling framework that allows for flexible decomposition of an article in the order p(domain, date, authors, headline, body). At inference time, a set of fields F is provided as context, with each field f wrapped in field-specific start and end tokens. During training, inference is simulated by randomly partitioning an article's fields into two disjoint sets F1 and F2. The researchers also randomly drop out individual fields with probability 10%, and drop out all but the body with probability 35%; this allows the model to learn how to perform unconditional generation as well.

For language modeling, two evaluation modes are considered: unconditional, where no context is provided and the model must generate the article body; and conditional, in which the full metadata is provided as context. The researchers evaluate the quality of disinformation generated by their largest model, GROVER-Mega, using nucleus (top-p) sampling with p = 0.96. The articles are classified into four classes: human-written articles from reputable news websites (Human News), GROVER-written articles conditioned on the same metadata (Machine News), human-written articles from known propaganda websites (Human Propaganda), and GROVER-written articles conditioned on the propaganda metadata (Machine Propaganda).

Image Source: Defending Against Neural Fake News

When rated by qualified workers on Amazon Mechanical Turk, it was found that although the quality of GROVER-written news is not as high as that of human-written news, the model is very skilled at rewriting propaganda. The overall trustworthiness score of propaganda increases from 2.19 to 2.42 (out of 3) when rewritten by GROVER.
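To make the conditional-generation scheme above concrete, here is a rough sketch of how a metadata-conditioned training example might be assembled. It is an illustration only, not the authors' code: the field token names and the sample article values are assumptions, while the field order and the 10%/35% dropout probabilities come from the description above.

```python
import random

FIELDS = ["domain", "date", "authors", "headline", "body"]

def make_training_example(article, p_drop_field=0.10, p_body_only=0.35):
    """Serialize an article with per-field start/end tokens, randomly
    partitioning the surviving fields into context (F1) and targets (F2)."""
    fields = dict(article)

    if random.random() < p_body_only:
        # Keep only the body, so the model also learns unconditional generation.
        fields = {"body": fields["body"]}
    else:
        # Drop individual metadata fields with a small probability.
        fields = {k: v for k, v in fields.items()
                  if k == "body" or random.random() >= p_drop_field}

    # Randomly split the remaining fields into context and generation targets.
    names = [f for f in FIELDS if f in fields]
    random.shuffle(names)
    cut = random.randint(0, len(names))
    context = sorted(names[:cut], key=FIELDS.index)
    targets = sorted(names[cut:], key=FIELDS.index)

    serialize = lambda keys: " ".join(
        f"<start-{k}> {fields[k]} <end-{k}>" for k in keys)
    return serialize(context), serialize(targets)

article = {"domain": "example.com", "date": "May 29, 2019",
           "authors": "Jane Doe", "headline": "A headline", "body": "Body text..."}
ctx, tgt = make_training_example(article)
print("context:", ctx)
print("targets:", tgt)
```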
Neural Fake News Detection using GROVER

The role of the verifier is to mitigate the harm of neural fake news by classifying articles as human- or machine-written. The detection task is framed in a semi-supervised setting: the neural verifier (or discriminator) has access to many human-written news articles from March 2019 and before, i.e., the entire RealNews training set, but only limited access to generations and more recent news articles. For example, 10k news articles from April 2019 are used for generating article body text, and another 10k are used as a set of human-written news articles; the data is split in a balanced way, with 10k for training, 2k for validation, and 8k for testing. The verifier is evaluated in two modes:

- In the unpaired setting, the verifier is given single news articles, which must be classified independently as Human or Machine.
- In the paired setting, the model is given two news articles with the same metadata, one real and one machine-generated. The verifier must assign the machine-written article a higher Machine probability than the human-written article.

Both modes are evaluated in terms of accuracy.

Image Source: Defending Against Neural Fake News

It was found that the paired setting is significantly easier than the unpaired setting across the board, suggesting that it is often difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators. Using GROVER to discriminate GROVER's generations results in roughly 90% accuracy across the range of sizes. If a larger generator is used, accuracy slips below 81%; conversely, if the discriminator is larger, accuracy is above 98%. Lastly, other discriminators perform worse than GROVER overall, which suggests that effective discrimination requires an inductive bias similar to the generator's.

Thus, GROVER can rewrite propaganda articles, with humans rating the rewritten versions as more trustworthy; at the same time, GROVER can also defend against such generated content. The researchers are of the opinion that an ensemble of deep generative models, such as GROVER, should be used to analyze the content of a text. Unsurprisingly, the workings of the GROVER model have caught many people's attention.

https://twitter.com/str_t5/status/1137108356588605440

https://twitter.com/currencyat/status/1137420508092391424

While some find this an interesting mechanism to combat fake news, others point out that it doesn't matter if GROVER can identify its own texts if it can't identify the texts generated by other models. Releasing a model like GROVER can turn out to be extremely irresponsible rather than defensive. A user on Reddit says, "These techniques for detecting fake news are fundamentally misguided. You cannot just train a statistical model on a bunch of news messages and expect it to be useful in detecting fake news. The reason for this should be obvious: there is no real information about the label ('fake' vs 'real' news) encoded in the data. Whether or not a piece of news is fake or real depends on the state of the external world, which is simply not present in the data. The label is practically independent of the data."

Another user on Hacker News comments, "Generative neural networks these days are both fascinating and depressing - feels like we're finally tapping into how subsets of human thinking & creativity work. But that knocks us off our pedestal, and threatens to make even the creative tasks we thought were strictly a human specialty irrelevant; I know we're a long way off from generalized AI, but we seem to be making rapid progress, and I'm not sure society's mature enough or ready for it.
Especially if the cutting edge tools are in the service of AdTech and such, endlessly optimizing how to absorb everybody's spare attention. Perhaps there's some bright future where we all just relax and computers and robots take care of everything for us, but can't help feeling like some part of the human spirit is dying."

A few users feel that this kind of 'generate and detect your own fake news' model will become unnecessary in the future. It is only a matter of time before text written by algorithms becomes indistinguishable from human-written text; at that point, there will be no way to tell such articles apart. A user suggests, "I think to combat fake news, especially algorithmic one, we'll need to innovate around authentication mechanism that can effectively prove who you are and how much effort you put into writing something. Digital signatures or things like that."

For more details about the GROVER model, head over to the research paper.

Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts
Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence

Through the customer's eyes: 4 ways Artificial Intelligence is transforming ecommerce

Savia Lobo
23 Nov 2017
5 min read
We have come a long way from what ecommerce looked like two decades ago. From a non-existent entity, it has grown into a world-devouring business model that is a real threat to the traditional retail industry. It has moved from a basic static web page with limited product listings to a full-grown virtual marketplace where anyone can buy or sell anything from anywhere, at any time, at the click of a button. At the heart of this transformation are two things: customer experience and technology. This is what Jeff Bezos, founder and CEO of Amazon, one of the world's largest ecommerce sites, believes: "We see our customers as invited guests to a party, and we are the hosts. It's our job every day to make every important aspect of the customer experience a little bit better." Now, with the advent of AI, the retail space, especially e-commerce, is undergoing another major transformation that will redefine customer experiences and thereby once again change the dynamics of the industry. So, how is AI-powered ecommerce actually changing the way shoppers shop?

AI-powered ecommerce makes search easy, accessible and intuitive

Looking for something? Type it! Say it! Searching for a product you can't name? No worries. Just show a picture.

"A lot of the future of search is going to be about pictures instead of keywords." - Ben Silbermann, CEO of Pinterest

We take that statement with a pinch of salt, but we are reasonably confident that a lot of product search is going to be non-text based. Though text searches are common, voice and image searches in e-commerce are now gaining traction. AI makes it possible for the customer to move beyond simple text-based product search and search more easily and intuitively through voice and visual product searches. This also makes search more accessible. It uses natural language processing to understand the customer's natural language, be it in text or speech, to provide more relevant search results. Visual product searches are made possible through a combination of computer vision, image recognition, and reverse image search algorithms. Amazon Echo, a home-automation speaker, has a voice assistant, Alexa, that helps customers buy products online through simple conversations. Slyce uses a visual search feature wherein the customer can scan a barcode, a catalog, or even a real image, just like Amazon's in-app visual feature. Clarifai helps developers build applications that detect images and videos and search for related content.

AI-powered ecommerce makes personalized product recommendations

When you search for a product, the AI underneath recommends further options based on your search history or on what other users with similar tastes found interesting. Recommendation engines employ one, or a combination, of three types of recommendation algorithms: content-based filtering, collaborative filtering, and complementary products. The relevance and accuracy of the results produced depend on various factors, such as the type of recommendation engine used, the quantity and quality of data used to train the system, and the data storage and retrieval strategies used, among others. For instance, Amazon uses DSSTNE (Deep Scalable Sparse Tensor Network Engine, pronounced "Destiny") to make customized product recommendations to its customers. The customer data collected and stored is used by DSSTNE to train and generate predictions for customers.
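As a concrete illustration of the collaborative filtering idea mentioned above, here is a minimal sketch of user-based collaborative filtering on a toy ratings matrix. It is not Amazon's DSSTNE or any production system; the ratings and product indices are made up.

```python
import numpy as np

# Toy ratings matrix: rows are users, columns are products, 0 means "not rated".
ratings = np.array([
    [5, 4, 0, 0, 1],
    [4, 5, 1, 0, 0],
    [0, 1, 5, 4, 0],
    [1, 0, 4, 5, 3],
], dtype=float)

def recommend(user_idx, ratings, top_n=2):
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(ratings, axis=1)
    sims = ratings @ ratings[user_idx] / (norms * norms[user_idx] + 1e-9)
    sims[user_idx] = 0.0                      # ignore self-similarity

    # Score each product by similarity-weighted ratings from other users.
    scores = sims @ ratings
    scores[ratings[user_idx] > 0] = -np.inf   # skip products already rated
    return np.argsort(scores)[::-1][:top_n]

print("Recommended product indices for user 0:", recommend(0, ratings))
```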
The data processing itself takes place on CPU clusters, whereas the training and predictions take place on GPUs to ensure speed and scalability.

Virtual assistants as your personal shopping assistants

Now, what if we said you can have all the benefits we have discussed above without having to do a lot of work yourself? In other words, what if you had a personal shopping assistant who knows your preferences, handles all the boring aspects of shopping (searching, comparing prices, going through customer reviews, tracking orders and so on) and brings you products that are just right, with the best deals? Mona, one such personal shopper, can do all of the above and more. It uses a combination of artificial intelligence and big data to do this. Virtual assistants can either be fully AI-driven or a combination of AI and human collaboration. Chatbots also assist shoppers, but within a more limited scope. They can help resolve customer queries with zero downtime and also assist in simple tasks such as notifying the customer of price changes and placing and tracking orders. Domino's has a Facebook Messenger bot that enables customers to order food. Metail, an AI-powered ecommerce website, takes in your body measurements so you can actually see how an item of clothing would look on you. Botpress helps developers build their own chatbots in less time.

Maximizing CLV (customer lifetime value) with AI-powered CRM

AI-powered ecommerce in CRM aims to help businesses predict CLV and sell the right product to the right customer at the right time, every time, leveraging the machine learning and predictive capabilities of AI. It also helps businesses provide the right level of customer service and engagement. In other words, by combining predictive capabilities with automated 1-1 personalization, an AI-backed CRM can maximize CLV for every customer! Salesforce Einstein and IBM Watson are some of the frontrunners in this space. IBM Watson, with its cognitive touch, helps ecommerce sites analyze their mountain of customer data and glean useful insights to predict things like what customers are looking for, which brands are popular, and so on. It can also help with dynamic pricing of products by predicting when to discount and when to increase the price, based on analyzing demand and competitors' pricing tactics.

It is clear that AI not only has the potential to transform e-commerce as we know it but has already become central to the way leading ecommerce platforms such as Amazon function. Intelligent e-commerce is here and now. The near future of ecommerce is omnicommerce, driven by the marriage between AI and robotics, ushering in the ultimate customer experience - one that is beyond our current imagination.

Predictive Analytics with AWS: A quick look at Amazon ML

Natasha Mathur
09 Aug 2018
9 min read
As artificial intelligence and big data have become a ubiquitous part of our everyday lives, cloud-based machine learning services are part of a rising billion-dollar industry. Among the several services currently available in the market, Amazon Machine Learning stands out for its simplicity. In this article, we will look at Amazon Machine Learning, MLaaS, and other related concepts. This article is an excerpt taken from the book 'Effective Amazon Machine Learning' written by Alexis Perrier.

Machine Learning as a Service

Amazon Machine Learning is an online service by Amazon Web Services (AWS) that does supervised learning for predictive analytics. Launched in April 2015 at the AWS Summit, Amazon ML joins a growing list of cloud-based machine learning services, such as Microsoft Azure, Google Prediction, IBM Watson, PredictionIO, BigML, and many others. These online machine learning services form an offer commonly referred to as Machine Learning as a Service (MLaaS), following a naming pattern similar to other cloud-based services such as SaaS, PaaS, and IaaS (Software, Platform, and Infrastructure as a Service, respectively).

Studies show that MLaaS is a potentially big business trend. ABI Research, a business intelligence consultancy, estimates that machine learning-based data analytics tools and services revenues will hit nearly $20 billion in 2021 as MLaaS services take off, as outlined in this business report. Eugenio Pasqua, a Research Analyst at ABI Research, said the following:

"The emergence of the Machine-Learning-as-a-Service (MLaaS) model is good news for the market, as it cuts down the complexity and time required to implement machine learning and thus opens the doors to an increase in its adoption level, especially in the small-to-medium business sector."

The increased accessibility is a direct result of using an API-based infrastructure to build machine-learning models instead of developing applications from scratch. Offering efficient predictive analytics models without the need to code, host, and maintain complex code bases lowers the bar and makes ML available to smaller businesses and institutions. Amazon ML takes this democratization approach further than the other actors in the field by significantly simplifying the predictive analytics process and its implementation. This simplification revolves around four design decisions that are embedded in the platform:

- A limited set of tasks: binary classification, multiclass classification, and regression
- A single linear algorithm
- A limited choice of metrics to assess the quality of the prediction
- A simple set of tuning parameters for the underlying predictive algorithm

That somewhat constrained environment is simple enough while addressing most predictive analytics problems relevant to business. It can be leveraged across an array of different industries and use cases. Let's see how!

Leveraging full AWS integration

The AWS data ecosystem of pipelines, storage, environments, and Artificial Intelligence (AI) is also a strong argument in favor of choosing Amazon ML as a business platform for its predictive analytics needs. Although Amazon ML is simple, the service evolves to greater complexity and more powerful features once it is integrated into a larger structure of AWS data-related services. AWS is already a major factor in cloud computing.
Here's what an excerpt from The Economist, August 2016, has to say about AWS (http://www.economist.com/news/business/21705849-how-open-source-software-and-cloud-computing-have-set-up-it-industry):

"AWS shows no sign of slowing its progress towards full dominance of cloud computing's wide skies. It has ten times as much computing capacity as the next 14 cloud providers combined, according to Gartner, a consulting firm. AWS's sales in the past quarter were about three times the size of its closest competitor, Microsoft's Azure."

This gives an edge to Amazon ML, as many companies that are using cloud services are likely to already be using AWS. Adding simple and efficient machine learning tools to the product offering mix anticipates the rise of predictive analytics features as a standard component of web services. Seamless integration with other AWS services is a strong argument in favor of using Amazon ML despite its apparent simplicity. The following architecture is a case study taken from an AWS January 2016 white paper titled Big Data Analytics Options on AWS (http://d0.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf), showing a potential AWS architecture for sentiment analysis on social media. It shows how Amazon ML can be part of a more complex architecture of AWS services.

Comparing performances in Amazon ML services

Keeping systems and applications simple is always difficult, but often worth it for the business. Examples abound of overloaded UIs bringing down the user experience, while products with simple, elegant interfaces and minimal features enjoy widespread popularity. The Keep It Simple mantra is even more difficult to adhere to in a context such as predictive analytics, where performance is key. This is the challenge Amazon took on with its Amazon ML service. A typical predictive analytics project is a sequence of complex operations: getting the data, cleaning the data, selecting, optimizing and validating a model, and finally making predictions. In the scripting approach, data scientists develop codebases using machine learning libraries such as the Python scikit-learn library or R packages to handle all these steps, from data gathering to predictions in production. Just as a developer breaks down the necessary steps into modules for maintainability and testability, Amazon ML breaks down a predictive analytics project into different entities: datasource, model, evaluation, and predictions. It's the simplicity of each of these steps that makes Amazon ML so effective for implementing successful predictive analytics projects.
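That datasource-model-evaluation-prediction workflow can also be driven programmatically through the AWS SDK. The following is a rough sketch using boto3's machinelearning client; the IDs, S3 paths, schema location, record fields, and endpoint URL are illustrative assumptions rather than values from this article, and the real-time endpoint must already have been created for the final call to work.

```python
import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")

# 1. Datasource: point Amazon ML at a CSV in S3, plus a JSON schema describing it.
ml.create_data_source_from_s3(
    DataSourceId="ds-training-01",
    DataSpec={
        "DataLocationS3": "s3://my-bucket/training.csv",
        "DataSchemaLocationS3": "s3://my-bucket/training.csv.schema",
    },
    ComputeStatistics=True,
)

# 2. Model: a binary classifier trained with the service's linear algorithm.
ml.create_ml_model(
    MLModelId="ml-churn-01",
    MLModelType="BINARY",
    TrainingDataSourceId="ds-training-01",
)

# 3. Evaluation: score the trained model against a held-out datasource.
ml.create_evaluation(
    EvaluationId="ev-churn-01",
    MLModelId="ml-churn-01",
    EvaluationDataSourceId="ds-holdout-01",
)

# 4. Prediction: request a real-time prediction for a single record
#    (assumes a real-time endpoint has been enabled for the model).
result = ml.predict(
    MLModelId="ml-churn-01",
    Record={"age": "42", "plan": "premium"},
    PredictEndpoint="https://realtime.machinelearning.us-east-1.amazonaws.com",
)
print(result["Prediction"])
```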
Engineering data versus model variety

Having a large choice of algorithms for your predictions is always a good thing, but at the end of the day, domain knowledge and the ability to extract meaningful features from clean data is often what wins the game. Kaggle is a well-known platform for predictive analytics competitions, where the best data scientists across the world compete to make predictions on complex datasets. In these competitions, gaining a few decimals on your prediction score is what makes the difference between earning the prize or being just an extra line on the public leaderboard among thousands of other competitors. One thing Kagglers quickly learn is that choosing and tuning the model is only half the battle. Feature extraction, or how to extract relevant predictors from the dataset, is often the key to winning the competition. In real life, when working on business-related problems, the quality of the data processing phase and the ability to extract meaningful signal out of raw data is the most important and time-consuming part of building an effective predictive model. It is well known that "data preparation accounts for about 80% of the work of data scientists" (http://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says/). Model selection and algorithm optimization remain an important part of the work but are often not the deciding factor where the implementation is concerned. A solid and robust implementation that is easy to maintain and connects to your ecosystem seamlessly is often preferred to an overly complex model developed and coded in-house, especially when the scripted model only produces small gains compared to a service-based implementation.

Amazon's expertise and the gradient descent algorithm

Amazon has been using machine learning for the retail side of its business and has built serious expertise in predictive analytics. This expertise translates into the choice of algorithm powering the Amazon ML service. The Stochastic Gradient Descent (SGD) algorithm is the algorithm powering Amazon ML linear models and is ultimately responsible for the accuracy of the predictions generated by the service. The SGD algorithm is one of the most robust, resilient, and optimized algorithms. It has been used in many diverse environments, from signal processing to deep learning, and for a wide variety of problems since the 1960s, with great success. SGD has also given rise to many highly efficient variants adapted to a wide variety of data contexts. We will come back to this important algorithm in a later chapter; suffice it to say at this point that the SGD algorithm is the Swiss army knife of predictive analytics algorithms.

Several benchmarks and tests of the Amazon ML service can be found across the web (Amazon, Google, and Azure: https://blog.onliquid.com/machine-learning-services-2/ and Amazon versus scikit-learn: http://lenguyenthedat.com/minimal-data-science-2-avazu/). Overall results show that Amazon ML's performance is on a par with other MLaaS platforms, but also with scripted solutions based on popular machine learning libraries such as scikit-learn. For a given problem in a specific context, with an available dataset and a particular choice of scoring metric, it is probably possible to code a predictive model using an adequate library and obtain better performance than with Amazon ML. But what Amazon ML offers is stability, an absence of coding, and a very solid benchmark record, as well as seamless integration with the Amazon Web Services ecosystem that already powers a large portion of the Internet.
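To make the SGD algorithm discussed above concrete, here is a minimal, generic NumPy implementation of stochastic gradient descent for a linear regression model. It illustrates the algorithm itself on synthetic data and is not a reproduction of Amazon's internal implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data following y = 3*x1 - 2*x2 + 1, plus a little noise.
X = rng.normal(size=(1000, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1 + rng.normal(scale=0.1, size=1000)

w, b = np.zeros(2), 0.0
learning_rate = 0.05

for epoch in range(5):
    for i in rng.permutation(len(X)):       # visit one example at a time
        error = (X[i] @ w + b) - y[i]       # prediction error for this example
        w -= learning_rate * error * X[i]   # gradient step for the weights
        b -= learning_rate * error          # gradient step for the bias

print("learned weights:", w.round(2), "learned bias:", round(b, 2))  # ~[3, -2] and ~1
```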
Amazon ML service pricing strategy

As with other MLaaS providers and AWS services, Amazon ML only charges for what you consume. The cost is broken down into the following:

- An hourly rate for the computing time used to build predictive models
- A prediction fee per thousand prediction samples
- In the context of real-time (streaming) predictions, a fee based on the memory allocated upfront for the model

The computational time increases as a function of the following:

- The complexity of the model
- The size of the input data
- The number of attributes
- The number and types of transformations applied

At the time of writing, these charges are as follows:

- $0.42 per hour for data analysis and model building fees
- $0.10 per 1,000 predictions for batch predictions
- $0.0001 per prediction for real-time predictions
- $0.001 per hour for each 10 MB of memory provisioned for your model

These prices do not include fees related to data storage (S3, Redshift, or RDS), which are charged separately. During the creation of your model, Amazon ML gives you a cost estimation based on the data source that has been selected. The Amazon ML service is not part of the AWS free tier, a 12-month offer applicable to certain AWS services for free under certain conditions.
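As a quick worked example using the rates listed above (which may have changed since this was written), the following sketch estimates the bill for a hypothetical batch prediction job; the job sizes are made up for illustration.

```python
# Rates quoted above; the job sizes below are invented for illustration.
COMPUTE_PER_HOUR = 0.42     # data analysis and model building, per hour
BATCH_PER_1000 = 0.10       # batch predictions, per 1,000 records

def estimate_batch_job(compute_hours, n_predictions):
    """Estimate the charge for building a model and running one batch prediction job."""
    return compute_hours * COMPUTE_PER_HOUR + (n_predictions / 1000) * BATCH_PER_1000

# Three hours of model building plus 100,000 batch predictions:
print(f"${estimate_batch_job(3, 100_000):.2f}")  # 3 * 0.42 + 100 * 0.10 = $11.26
```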
To summarize, we presented a simple introduction to the Amazon ML service. Amazon ML is built on solid ground, with a simple yet very efficient algorithm driving its predictions. If you found this post useful, be sure to check out the book 'Effective Amazon Machine Learning' to learn about predictive analytics and other concepts in AWS machine learning.

Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers
AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer

Teaching AI ethics - Trick or Treat?

Natasha Mathur
31 Oct 2018
5 min read
The Public Voice Coalition announced Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018 last week. "The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI", reads EPIC's guidelines page.

Artificial intelligence ethics aims to improve the design and use of AI, to minimize the risks to society, and to ensure the protection of human rights. AI ethics focuses on values such as transparency, fairness, reliability, validity, accountability, accuracy, and public safety.

Why teach AI ethics?

Without AI ethics, the wonders of AI can turn into the dangers of AI, posing strong threats to society and even to human lives. One example is when, earlier this year, an autonomous Uber car, a 2017 Volvo SUV traveling at roughly 40 miles an hour, killed a woman in the street in Arizona. This incident brings out the challenges and nuances of building an AI system with the right set of values embedded in it. As different factors are considered for an algorithm to reach the required set of outcomes, it is more than possible that these criteria are not always shared transparently with users and authorities. Other non-life-threatening but still dangerous examples include the time Google Allo responded with a turban emoji when asked to suggest three emoji responses to a gun emoji, and when Microsoft's Twitter bot Tay tweeted racist and sexist comments. AI scientists should be taught early on that these values are meant to be at the forefront when deciding on factors such as the design, logic, techniques, and outcome of an AI project.

Universities and organizations promoting learning about AI ethics

What's encouraging is that organizations and universities are taking steps (slowly but surely) to promote the importance of teaching ethics to students and employees working with AI or machine learning systems. For instance, the World Economic Forum Global Future Councils on Artificial Intelligence and Robotics has come out with a "Teaching AI ethics" project that includes creating a repository of actionable and useful materials for faculties wishing to add social inquiry and discourse to their AI coursework. This is a great opportunity, as the project connects professors from around the world and offers them a platform to share, learn, and customize their curricula to include a focus on AI ethics. Cornell, Harvard, MIT, Stanford, and the University of Texas are some of the universities that recently introduced courses on ethics when designing autonomous and intelligent systems. These courses put an emphasis on AI's ethical, legal, and policy implications, along with teaching students how to deal with challenges such as biased datasets in AI. Mozilla has taken the initiative to make people more aware of the social implications of AI in our society through its Creative Media Awards. "We're seeking projects that explore artificial intelligence and machine learning.
In a world where biased algorithms, skewed data sets, and broken recommendation engines can radicalize YouTube users, promote racism, and spread fake news, it's more important than ever to support artwork and advocacy work that educates and engages internet users", reads the Mozilla awards page. Moreover, Mozilla also announced a $3.5 million award for the 'Responsible Computer Science Challenge' to encourage teaching ethical coding to CS graduates. Other examples include Google's AI ethics principles, announced back in June, to abide by when developing AI projects, and SAP's AI ethics guidelines and advisory panel, created last month. SAP says that it has designed these guidelines because it "considers the ethical use of data a core value. We want to create software that enables intelligent enterprise and actually improves people's lives. Such principles will serve as the basis to make AI a technology that augments human talent". Other organizations, like DrivenData, have come out with tools like Deon, a handy tool that helps data scientists add an ethics checklist to their data science projects, making sure that all projects are designed with ethics at the center. Some, however, feel that having to explain how an AI system reached a particular outcome (in the name of transparency) can put a damper on its capabilities. For instance, according to David Weinberger, a senior researcher at the Harvard Berkman Klein Center for Internet & Society, "demanding explicability sounds fine, but achieving it may require making artificial intelligence artificially stupid".

Teaching AI ethics - trick or treat?

AI has transformed the world as we know it. It has taken over different spheres of our lives and made things much simpler for us. However, to make sure that AI continues to deliver its transformative and evolutionary benefits effectively, we need ethics. From governments to tech organizations to young data scientists, everyone must use this tech responsibly. Having AI ethics in place is an integral part of the AI development process and will shape a healthy future for robotics and artificial intelligence. That is why teaching AI ethics is a sure-shot treat. It is a TREAT that will boost the productivity of humans in AI and help build a better tomorrow.

Does AI deserve to be so Overhyped?

Aaron Lazar
28 May 2018
6 min read
The short answer is yes, and no. The long answer is, well, read on to find out. Several people have been asking the question, including myself, wondering whether artificial intelligence is just another passing fad, like maybe Google Glass or nanotechnology. The hype around AI began over the past few years, although if you look back at the '60s it seems to have started way back then. From the early '90s all the way to the early 2000s, a lot of media and television shows were talking about AI quite a bit. Going 25 centuries even further back, Aristotle speaks not just of thinking machines but goes on to talk of autonomous ones in his book, Politics:

"for if every instrument, at command, or from a preconception of its master's will, could accomplish its work (as the story goes of the statues of Daedalus; or what the poet tells us of the tripods of Vulcan, "that they moved of their own accord into the assembly of the gods"), the shuttle would then weave, and the lyre play of itself; nor would the architect want servants, or the [1254a] master slaves." - Aristotle, Politics: A treatise on Government, Book 1, Chapter 4

This imagery of AI has managed to sink into our subconscious minds over the centuries, propelling creative work, academic research and industrial revolutions toward that goal. The thought of giving machines a mind of their own existed quite long ago, but recent advancements in technology have made it much clearer and more realistic.

The Rise of the Machines

The year is 2018. The fourth industrial revolution is happening and intelligent automation has taken over. This is the point where I say no, AI is not overhyped. General Electric, for example, is a billion-dollar manufacturing company that has already invested in AI. GE Digital has AI running through several automated systems, and it even has its own IIoT platform called Predix. Similarly, in the field of healthcare, the implementation of AI is growing in leaps and bounds. The Google DeepMind project is able to process millions of medical records within minutes. Although this kind of research is in its early phase, Google is working closely with the Moorfields Eye Hospital NHS Foundation Trust to implement AI and improve eye treatment. AI startups focused on healthcare and allied areas such as genetic engineering are among the most heavily invested in and venture-capital-backed in recent times. Computer vision, or image recognition, is one field where AI has really proven its power. Analyzing datasets like Iris has never been easier, paving the way for more advanced use cases like automated quality checks in manufacturing units. Another interesting field is healthcare, where AI has helped sift through tonnes of data, helping doctors diagnose illnesses quicker, manufacture more effective and responsive drugs, and monitor patients. The list is endless, clearly showing that AI has made its mark in several industries.

Back (up) to the Future

Now, if you talk about the commercial implementations of AI, they're still quite far off at the moment. Take the same computer vision application, for example. Its implementation will be a huge breakthrough in autonomous vehicles. But even if researchers have managed to obtain 80% accuracy for object recognition on roads, the battle is not close to being won! And even if they do improve, do you think driverless vehicles are ready to drive in the snow, through the rain, or even through storms?
I remember, a few years ago, Business Process Outsourcing was one industry, at least in India, that was quite fearful of the entry of AI and autonomous systems that might take over its jobs. Machines are only capable of performing 60-70% of the BPO processes in insurance, and with changing customer requirements and simultaneously falling patience levels, these numbers are terrible! It looks like the end of Moore's law is here, for AI I mean. Well, you can't really expect AI to have the same exponential growth that computers did decades ago. There are a lot of unmet expectations in several fields, which has a considerable number of people thinking that AI isn't going to solve their problems now, and they're right. It is probably going to take a few more years to mature, making it a thing of the future, not of the present. Is AI overhyped now? Yeah, maybe?

What I think

Someone once said hype is a double-edged sword. If it's not enough, innovation may become obscure; if it's too much, expectations will become unreasonable. It's true that AI has several beneficial use cases, but what about the fairness of such systems? Will machines continue to think the way they're supposed to, or will they start finding their own missions that don't involve benefits to the human race? At the same time, there's also the question of security and data privacy. GDPR will come into effect in a few days, but what about the prevailing issues of internet security? I had an interesting discussion with a colleague yesterday. We were talking about what the impact of AI could be for us as end customers in a developing and young country like India. Do we really need to fear losing our jobs? Will we be able to reap the benefits of AI directly, or would the impact be indirect? The answer is probably yes, but not so soon. If we drew up the hierarchy of needs pyramid for AI, it would look something like the above. For each field to fully leverage AI, it's going to involve several stages: collecting data, storing it effectively, exploring it, aggregating it, optimizing it with the help of algorithms, and finally achieving AI. That's bound to take a LOT of time! Honestly speaking, a country like India still lacks much implementation of AI in several fields. The major customers of AI, apart from some industrial giants, will obviously be the government, although that is sure to take at least a decade or so, keeping in mind the several aspects to be accomplished first. In the meantime, budding AI developers and engineers are scurrying to skill themselves up in the race to be the cream of the crop! Similarly, what about the rest of the world? Well, I can't speak for everyone, but if you ask me, AI is a really promising technology, and I think we need to give it some time; allow the industries and organisations investing in it to let it evolve and ultimately benefit us customers, one way or another.

You can now make music with AI thanks to Magenta.js
Splunk leverages AI in its monitoring tools

Unity Machine Learning Agents: Transforming Games with Artificial Intelligence

Amey Varangaonkar
30 Mar 2018
4 min read
Unity has undoubtedly been one of the leaders when it comes to developing cross-platform products, going from strength to strength in developing visually stimulating 2D as well as 3D games and simulations. With artificial intelligence revolutionizing the way games are being developed these days, Unity has identified the power of machine learning and introduced Unity Machine Learning Agents. With this, it plans on empowering game developers and researchers in their quest to develop intelligent games, robotics and simulations.

What are Unity Machine Learning Agents?

Traditionally, game developers have hard-coded the behaviour of game agents. Although effective, this is a tedious task, and it also limits the intelligence of the agents. Simply put, the agents are not smart enough. To overcome this obstacle, Unity has simplified the training process for game developers and researchers by introducing Unity Machine Learning Agents (ML-Agents, in short). Through just a simple Python API, game agents can now be trained to use deep reinforcement learning, an advanced form of machine learning, to learn from their actions and modify their behaviour accordingly. These agents can then be used to dynamically modify the difficulty of the game.

How do they work?

As mentioned earlier, Unity ML-Agents are designed to work on the concept of deep reinforcement learning, a branch of machine learning where agents are trained to learn from their own actions. Here is a simple flowchart to demonstrate how reinforcement learning works:

The reinforcement learning training cycle

The learning environment to be configured for the ML-Agents consists of three primary objects:

- Agent: Every agent has a unique set of states, observations and actions within the environment, and is assigned rewards for particular events.
- Brain: A brain decides what action any agent is supposed to take in a particular scenario. Think of it as a regular human brain, which basically controls the bodily functions.
- Academy: This object contains all the brains within the environment.

To train the agents, a variety of scenarios are made possible by varying how the different components (explained above) of the environment are connected. Some involve single agents, some simultaneous single agents, and others cooperative and competitive multi-agents, and more. You can read more about these possibilities on the official Unity blog. Apart from the way these agents are trained, Unity is also adding some cool new features to the ML-Agents. Some of these are:

- Monitoring the agents' decision-making to make it more accurate
- Incorporating curriculum learning, by which the complexity of the tasks can gradually be increased to aid more effective learning
- Imitation learning, a newly introduced feature wherein the agents simply mimic the actions we want them to perform, rather than learning on their own
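To tie the Agent-Brain-Academy description to the reward cycle in the flowchart above, here is a minimal, framework-agnostic sketch of that loop in plain Python. It is not the ML-Agents Python API; the toy corridor environment, the action set, and the tabular Q-learning update are illustrative assumptions only.

```python
import random

class Corridor:
    """Toy environment: the agent starts at position 0 and is rewarded at position 5."""
    def __init__(self):
        self.position = 0

    def step(self, action):                    # action: -1 (left) or +1 (right)
        self.position = max(0, self.position + action)
        reward = 1.0 if self.position == 5 else -0.01
        return self.position, reward, self.position == 5

ACTIONS = (-1, 1)
q = {}                                         # (state, action) -> estimated value
alpha, gamma, epsilon = 0.5, 0.9, 0.2          # learning rate, discount, exploration

for episode in range(300):
    env, state, done, steps = Corridor(), 0, False, 0
    while not done and steps < 50:
        # Observe the state and pick an action (epsilon-greedy with random tie-breaking).
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: (q.get((state, a), 0.0), random.random()))
        next_state, reward, done = env.step(action)
        # Update the value estimate toward the reward plus discounted future value.
        best_next = 0.0 if done else max(q.get((next_state, a), 0.0) for a in ACTIONS)
        old = q.get((state, action), 0.0)
        q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state, steps = next_state, steps + 1

print("Value of moving right vs left from the start:",
      round(q.get((0, 1), 0.0), 2), "vs", round(q.get((0, -1), 0.0), 2))
```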
What next for Unity Machine Learning Agents?

Unity recently announced the release of the v0.3 beta SDK of the ML-Agents and has been making significant progress in this domain to develop smarter, more intelligent game agents for use with the Unity game engine. Still very much in the research phase, these agents can also be used by academic researchers to study the complex behaviour of trained models in different environments and scenarios where the variables associated with the in-game physics and visual appearance can be altered. Going forward, these agents can also be used by enterprises for large-scale simulations, in robotics, and in the development of autonomous vehicles. These are interesting times for game developers, and for Unity in particular, in their quest to develop smarter, cutting-edge games. The inclusion of machine learning in their game development strategy is a terrific move, although it will take some time for this to be perfected and incorporated seamlessly. Nonetheless, all the research and innovation being put in this direction certainly seems well worth it!

Admiring the many faces of Facial Recognition with Deep Learning

Sugandha Lahoti
07 Dec 2017
7 min read
Facial recognition technology is not new; in fact, it has been around for more than a decade. However, with the recent rise of artificial intelligence and deep learning, facial recognition technology has achieved new heights. In addition to facial detection, modern facial recognition technology also recognizes faces with high accuracy and in unfavorable conditions. It can also recognize expressions and analyze faces to generate insights about an individual. Deep learning has enabled a power-packed face recognition system, all geared up to achieve widespread adoption.

How has deep learning modernised facial recognition?

Traditional facial recognition algorithms would recognize images and people using distinct facial features (placement of the eyes, eye color, nose shape, and so on). However, they failed to identify people correctly under different lighting or with slight changes in appearance (beard growth, aging, or pose). In order to develop facial recognition techniques for a dynamic and ever-changing face, deep learning is proving to be a game changer. Deep neural nets go beyond the approach of manual feature extraction. These AI-based neural networks rely on image pixels to analyze the features of a particular face, so they can scan faces irrespective of lighting, ageing, pose, or emotion. Deep learning algorithms remember each time they recognize or fail to recognize a face, thus avoiding repeat mistakes and getting better with each attempt. Deep learning algorithms can also be helpful in converting 2D images to 3D.

Facial recognition in practice: Facial Recognition Technology in Multimedia

Deep learning enabled facial recognition technologies can be used to track audience reaction and measure different levels of emotion. Essentially, they can predict how a member of the audience will react to the rest of a film. Not only this, they also help determine what percentage of users will be interested in a particular movie genre. For example, Microsoft's Azure Emotion, an emotion API, detects emotions by analysing the facial expressions in image or video content over time. Caltech and Disney have collaborated to develop a neural network which can track facial expressions. Their deep learning based Factorised Variational Autoencoders (FVAEs) analyze the facial expressions of an audience for about 10 minutes and then predict how they will react to the rest of the film. These techniques help in estimating whether the viewers are giving the expected reactions in the right places; for example, the viewer is not expected to yawn at a comical scene. With this, Disney can also predict the earning potential of a particular movie and generate insights that may help producers create compelling movie trailers to maximize the number of footfalls. Smart TVs are also equipped with sophisticated cameras and deep learning algorithms for facial recognition. They can recognize the face of the person watching and automatically show channels and web applications programmed as their favorites. The British Broadcasting Corporation uses facial recognition technology built by CrowdEmotion. By tracking the faces of almost 4,500 audience members watching show trailers, they gauge exact customer emotions about a particular programme. This in turn helps them generate insights to showcase successful commercials.
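To make the point about learned face representations concrete, here is a short, hedged sketch using the open-source face_recognition library (built on dlib). It is not the code behind any of the commercial systems mentioned in this article, and the image file names are placeholders.

```python
import face_recognition

# Load a reference photo of a known person and a new photo to check.
known_image = face_recognition.load_image_file("person_reference.jpg")
unknown_image = face_recognition.load_image_file("new_photo.jpg")

# Each detected face is reduced to a 128-dimensional embedding learned by a deep network.
known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encodings = face_recognition.face_encodings(unknown_image)

for encoding in unknown_encodings:
    # Faces match when their embeddings are close in the learned space, which is
    # what makes recognition more robust to lighting, pose, and ageing than
    # hand-crafted feature rules.
    distance = face_recognition.face_distance([known_encoding], encoding)[0]
    same = face_recognition.compare_faces([known_encoding], encoding, tolerance=0.6)[0]
    print(f"distance={distance:.3f}, same person: {bool(same)}")
```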
Biometrics in Smartphones

A large number of smartphones nowadays are equipped with biometric capabilities. Facial recognition in smartphones is used not only as a means of unlocking and authorizing, but also for making secure transactions and payments. There has recently been a rise in chips with built-in deep learning ability, and these chips are embedded into smartphones. With a neural net embedded inside the device, crucial face biometric data never leaves the device or gets sent to the cloud, which in turn improves privacy and reduces latency. Some real-world examples include Intel's Nervana Neural Network Processor, Google's TPU, Microsoft's FPGA, and Nvidia's Tesla V100. Deep learning models embedded in a smartphone can construct a mathematical model of the face, which is then stored in a database. Using this mathematical face model, smartphones can easily recognize users even as their faces age or when they are obstructed by wearable accessories. Apple has recently launched the iPhone X facial recognition system, termed FaceID. It maps thousands of points on a user's face using a projector and an infrared camera (which can operate under varied lighting conditions). This map is then passed to a bionic chip embedded in the smartphone. The chip has a neural network which constructs a mathematical model of the user's face, used for biometric face verification and recognition. Windows Hello is also a facial recognition technology, used to unlock Windows smart devices equipped with infrared cameras. Qualcomm, a mobile technology organization, is working on a new depth-perception technology. It will include an image signal processor and high-resolution 3D depth-sensing cameras for facial recognition.

Face recognition for Travel

Facial recognition technologies can smooth the departure process for a customer by eliminating the need for a boarding pass. A traveller is scanned by cameras installed at various checkpoints, so they don't have to produce a boarding pass at every step. Emirates is collaborating with Dubai Customs, Police and Airports to use a facial recognition solution integrated with the UAE Wallet app. The project, known as the Together Initiative, allows travellers to register and store their biometric facial data at several kiosks placed in the check-in area. This facility helps passengers avoid presenting their physical documents at every touchpoint. Face recognition can also be used for detecting illegal immigration: the technology compares the photos of passengers taken immediately before boarding with the photos provided in their visa application. Biometric Exit is an initiative by the US government which uses facial recognition to identify individuals leaving the country. Facial recognition technology can also be used at train stations to reduce the waiting time for buying a train ticket or going through other security barriers. Bristol Robotics Laboratory has developed software which uses infrared cameras to identify passengers as they walk onto the train platform, so they do not need to carry tickets.

Retail and shopping

In the area of retail, smart facial recognition technologies can be helpful for fast checkout by keeping track of each customer as they shop across a store. This smart technology can also use machine learning and analytics to find trends in a shopper's purchasing behavior over time and devise personalized recommendations. Facial video analytics and deep learning algorithms can also identify loyal and VIP shoppers in a moving crowd, giving them a privileged VIP experience.
This gives them more reasons to come back and make repeat purchases. Facial biometrics can also accumulate rich statistics about the demographics (age, gender, shopping history) of an individual. Analyzing these statistics can generate insights that help organizations develop their products and marketing strategies. FindFace is one such platform that uses sophisticated deep learning technologies to generate meaningful data about the shopper. Its facial recognition system can verify faces with almost 99% accuracy. It can also help route shopper data to a salesperson for personalized assistance. Facial recognition technology can also be used to make secure payment transactions simply by analysing a person's face. Alibaba has set up a Smile to Pay face recognition system in KFC outlets, which allows customers to make secure payments by merely scanning their face. Facial recognition has emerged as a hot topic of interest and is poised to grow. On the flip side, organizations deploying such technology should incorporate privacy policies as a standard measure. Data collected from such facial recognition software can also be misused for targeting customers with ads, or for other illegal purposes. Organizations should implement a methodical and systematic approach to using facial recognition for the benefit of their customers. This will not only help businesses generate a new source of revenue, but will also usher in a new era of judicious automation.