
Tech Guides - Data

281 Articles

Why DeepMind made Sonnet open source

Sugandha Lahoti
03 Apr 2018
3 min read
DeepMind has always open sourced its projects with a bang. Last year, it announced that it was going to open source Sonnet, a library for quickly building neural network modules with TensorFlow. DeepMind shifted from Torch to TensorFlow as its framework of choice in early 2016, having been acquired by Google in 2014.

Why Sonnet if you have TensorFlow?

Since adopting TensorFlow, DeepMind has enjoyed the flexibility and adaptiveness of TF for building higher-level frameworks. To build neural network modules with TensorFlow, it created a framework called Sonnet. Sonnet doesn't replace TensorFlow; it simply eases the process of constructing neural networks. Prior to Sonnet, DeepMind developers had to become intimately familiar with the underlying TensorFlow graphs in order to correctly architect their applications. With Sonnet, creating neural network components is much easier: you first construct Python objects which represent some part of a neural network, and then separately connect these objects into the TensorFlow computation graph.

What makes Sonnet special?

Sonnet uses Modules. Modules encapsulate elements of a neural network, which in turn abstracts low-level aspects of TensorFlow applications. Sonnet enables developers to build their own Modules using a simple programming model. These Modules simplify neural network training and can help implement individual neural networks that can be combined into higher-level networks. Developers can also easily extend Sonnet by implementing their own modules. Using Sonnet, it becomes easier to switch between different models, allowing engineers to freely conduct experiments without worrying about hampering their entire projects.

Why open source Sonnet?

The announcement of Sonnet's open sourcing came on April 7, 2017. Most people appreciated it as a move in the right direction. One of DeepMind's focal purposes in open sourcing Sonnet was to enable the developer community to use Sonnet to take their own research forward. According to FossBytes, "DeepMind foresees Sonnet to be used by the community as a research propellant." With this open sourcing, the machine learning community can more actively contribute back by utilizing Sonnet in their own projects. Moreover, if the community becomes accustomed and acquainted with DeepMind's internal libraries, it will become easier for the DeepMind group to release other machine learning models alongside research papers. Certain experienced developers also point out that using TensorFlow and Sonnet together is similar to using TensorFlow and Torch together, with a Reddit comment stating "DeepMind's trying to turn TensorFlow into Torch". Nevertheless, the open sourcing of Sonnet is seen as part of DeepMind's broader commitment to open source AI research. Also, as Sonnet is adopted by the community, more similar frameworks are likely to develop that make neural network construction easier with TensorFlow as the underlying runtime, taking a further step towards the democratization of machine learning and its related fields. Sonnet is already available on GitHub and will be regularly updated by the DeepMind team to match the in-house version.
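As a closing illustration of the construct-then-connect pattern described above, here is a minimal sketch of what building with Sonnet looks like, assuming the original Sonnet 1.x API on top of TF1-style graph building; the module names and layer sizes are purely illustrative.

```python
import tensorflow as tf
import sonnet as snt

# Phase 1: construct Python objects that represent parts of a network.
# Nothing is added to the TensorFlow graph at this point.
to_hidden = snt.Linear(output_size=128, name="to_hidden")
to_logits = snt.Linear(output_size=10, name="to_logits")

# Phase 2: connect the modules into the computation graph.
images = tf.placeholder(tf.float32, shape=[None, 784])
hidden = tf.nn.relu(to_hidden(images))
logits = to_logits(hidden)

# Connecting the same module object again elsewhere in the graph reuses
# its variables, which is the low-level bookkeeping Sonnet hides from you.
logits_again = to_logits(tf.nn.relu(to_hidden(images)))
```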


8 Myths about RPA (Robotic Process Automation)

Savia Lobo
08 Nov 2017
9 min read
Many say we are on the cusp of the fourth industrial revolution, one that promises to blur the lines between the real, virtual, and biological worlds. Amongst many trends, Robotic Process Automation (RPA) is one of the buzzwords surrounding the hype of the fourth industrial revolution. Although poised to be a $6.7 trillion industry by 2025, RPA is shrouded in just as much fear as it is brimming with potential. We have heard time and again how automation can improve productivity, efficiency, and effectiveness while conducting business in transformative ways. We have also heard how automation, and machine-driven automation in particular, can displace humans and thereby lead to a dystopian world. As humans, we make assumptions based on what we see and understand. But sometimes those assumptions become so ingrained that they evolve into myths, which many start accepting as facts. Here is a closer look at some of the myths surrounding RPA.

Myth 1: RPA means robots will automate processes

The term robot evokes in our minds a picture of a metal humanoid with stiff joints that speaks in a monotone. RPA does mean robotic process automation, but the robot doing the automation is nothing like the ones we are used to seeing in the movies. These are software robots that perform routine processes within organizations. They are often referred to as virtual workers, or a digital workforce, complete with their own identity and credentials. They essentially consist of algorithms programmed by RPA developers with the aim of automating mundane business processes. These processes are repetitive, highly structured, fall within a well-defined workflow, consist of a finite set of tasks or steps, and may often be monotonous and labor intensive.

Let us consider a real-world example: automating the invoice generation process. The RPA system will run through all the emails in the system and download the PDF files containing details of the relevant transactions. Then, it will fill a spreadsheet with the details and maintain all the records therein. Later, it will log on to the enterprise system and generate appropriate invoice reports for each entry in the spreadsheet. Once the invoices are created, the system will send a confirmation mail to the relevant stakeholders. Here, the RPA user only specifies the individual tasks that are to be automated, and the system takes care of the rest of the process (a simplified sketch of this workflow follows below). So, yes, while it is true that RPA involves robots automating processes, it is a myth that these robots are physical entities or that they can automate all processes.
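As a rough, purely illustrative sketch of the invoice workflow just described, here is what the orchestration logic might look like in Python. Every object and method name below (mailbox, erp, mailer, and so on) is hypothetical; in practice an RPA platform such as UiPath or Blue Prism supplies equivalent building blocks through its own visual designer rather than hand-written code.

```python
def run_invoice_bot(mailbox, spreadsheet, erp, mailer, stakeholders):
    """Automate the invoice workflow described above.

    All objects passed in are hypothetical adapters standing in for the
    connectors a real RPA platform would provide.
    """
    records = []
    for email in mailbox.transaction_emails():           # 1. scan the inbox
        pdf = email.download_pdf()                        # 2. grab the PDF attachment
        records.append(pdf.extract_transaction_fields())  # 3. pull out the details

    spreadsheet.append_rows(records)                      # 4. maintain the ledger

    for record in records:
        invoice = erp.generate_invoice(record)            # 5. create the invoice
        mailer.send_confirmation(stakeholders, invoice)   # 6. notify stakeholders
```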
Myth 2: RPA is useful only in industries that rely heavily on software

"Almost anything that a human can do on a PC, the robot can take over without the need for IT department support." - Richard Bell, former Procurement Director at Averda

RPA is software that can be injected into a business process. Traditional industries such as banking and finance, healthcare, and manufacturing, which have significant routine tasks that depend on software for some of their functioning, can benefit from RPA. Loan processing and patient data processing are some examples. RPA, however, cannot help with automating the assembly line in a manufacturing unit or with performing regular tests on patients. Even in industries that maintain daily essential utilities such as cooking gas, electricity, and telephone services, RPA can be put to use for generating automated bills, invoices, meter readings, and so on.

By adopting RPA, businesses, irrespective of the industry they belong to, can achieve significant cost savings, operational efficiency, and higher productivity. To leverage the benefits of RPA, rather than understanding the SDLC process, it is important that users have a clear understanding of business workflow processes and domain knowledge. Industry professionals can be easily trained on how to put RPA into practice. The bottom line: RPA is not limited to industries that rely heavily on software to exist. But it is true that RPA can be used only in situations where some form of software is used to perform tasks manually.

Myth 3: RPA will replace humans in most frontline jobs

Many organizations employ a large workforce in frontline roles to do routine tasks such as data entry, managing processes, customer support, IT support, and so on. But frontline jobs are just as diverse as the people performing them. Take sales reps, for example. They bring in new business through their expert understanding of the company's products and their potential customer base, coupled with the associated soft skills. Currently, they spend significant time on administrative tasks such as developing and finalizing business contracts, updating the CRM database, and making daily status reports. Imagine the spike in productivity if these aspects could be taken off the plates of sales reps and they could just focus on cultivating relationships and converting leads. By replacing human effort in mundane tasks within frontline roles, RPA can help employees focus on higher value-yielding tasks.

In conclusion, RPA will not replace humans in most frontline jobs. It will, however, replace humans in a few roles that are very rule-based and narrow in scope, such as simple data entry operators or basic invoice processing executives. In most frontline roles like sales or customer support, RPA is quite likely to change, at least in some ways, how people see their job responsibilities. Also, the adoption of RPA will generate new job opportunities around the development, maintenance, and sale of RPA-based software.

Myth 4: Only large enterprises can afford to deploy RPA

The cost of implementing and maintaining RPA software and training employees to use it can be quite high. This can make it an unfavorable business proposition for SMBs with fairly simple organizational processes and limited cross-departmental considerations. On the other hand, large organizations with higher revenue generation capacity, complex business processes, and a large army of workers can deploy an RPA system to automate high-volume tasks quite easily and recover that cost within a few months.

It is obvious that large enterprises will benefit from RPA systems due to the economies of scale offered and faster recovery of the investments made. SMBs (small and medium-sized businesses) can also benefit from RPA to automate their business processes, but this is possible only if they look at RPA as a strategic investment whose cost will be recovered over a longer period of, say, two to four years.

Myth 5: RPA adoption should be owned and driven by the organization's IT department

The RPA team handling the automation process need not be from the IT department. The main role of the IT department is to provide the resources necessary for the software to function smoothly. An RPA reliability team that is trained in using RPA tools typically consists not of IT professionals but of business operations professionals.
In simple terms, RPA is not owned by the IT department but by the whole business, and it is driven by the RPA team.

Myth 6: RPA is an AI virtual assistant specialized to do a narrow set of tasks

An RPA bot performs a narrow set of tasks based on the given data and instructions. It is a system of rule-based algorithms which can be used to capture, process, and interpret streams of data, trigger appropriate responses, and communicate with other processes. However, it cannot learn on its own - a key trait of an AI system. Advanced AI concepts such as reinforcement learning and deep learning are yet to be incorporated into robotic process automation systems. Thus, an RPA bot is not an AI virtual assistant like Apple's Siri, for example. That said, it is not impractical to think that in the future these systems will be able to think on their own, decide the best possible way to execute a business process, and learn from their own actions to improve the system.

Myth 7: To use RPA software, one needs basic programming skills

Surprisingly, this is not true. Associates who use the RPA system need not have any programming knowledge. They only need to understand how the software works on the front end and how they can assign tasks to the RPA worker for automation. On the other hand, RPA system developers do require some programming skills, such as knowledge of scripting languages. Today, there are various platforms for developing RPA tools, such as UiPath, Blue Prism, and more, which empower RPA developers to build these systems without any hassle, reducing their coding responsibilities even further.

Myth 8: RPA software is fully automated and does not require human supervision

This is a big myth. RPA is often misunderstood as a completely automated system. Humans are indeed required to program the RPA bots, to feed them tasks for automation, and to manage them. The automation factor lies in aggregating and performing various tasks which would otherwise require more than one human to complete. There's also the efficiency factor which comes into play: RPA systems are fast and almost completely avoid the faults in the system or the process that are otherwise caused by human error. Having a digital workforce in place is far more profitable than recruiting a human workforce.

Conclusion

One of the most talked-about areas of technological innovation, RPA is clearly still in its early days and is surrounded by a lot of myths. However, there's little doubt that its adoption will take off rapidly as RPA systems become more scalable, more accurate, and faster to deploy. AI-, cognitive-, and analytics-driven RPA will take it up a notch or two and help businesses improve their processes even more by taking dull, repetitive tasks away from people. Hype can get ahead of reality, as we've seen quite a few times, but RPA is an area definitely worth keeping an eye on despite all the hype.


How should web developers learn machine learning?

Chris Tava
12 Jun 2017
6 min read
Do you have the motivation to learn machine learning? Given its relevance in today's landscape, you should be motivated to learn about this field. But if you're a web developer, how do you go about learning it? In this article, I show you how. So, let's break this down.

What is machine learning?

You may be wondering why machine learning matters to you, or how you would even go about learning it. Machine learning is a smart way to create software that finds patterns in data without having to explicitly program for each condition. Sounds too good to be true? Well, it is. Quite frankly, many of the state-of-the-art solutions to the toughest machine learning problems don't even come close to reaching 100 percent accuracy and precision. This might not sound right to you if you've been trained, or have learned, to be precise and deterministic with the solutions you provide to the web applications you've worked on. In fact, machine learning is such a challenging problem domain that data scientists describe problems as tractable or not. Computer algorithms can solve tractable problems in a reasonable amount of time with a reasonable amount of resources, whereas intractable problems simply can't be solved. Decades more of R&D is needed at a deep theoretical level to bring forward approaches and frameworks that will then take years to be applied and become useful to society. Did I scare you off? Nope? Okay, great. Then you accept this challenge to learn machine learning.

But before we dive into how to learn machine learning, let's answer the question: why does learning machine learning matter to you? Well, you're a technologist and, as a result, it's your duty, your obligation, to be on the cutting edge. The technology world is moving at a fast clip and it's accelerating. Take, for example, the shortened duration between public accomplishments of machine learning against top gaming experts. It took a while to get to the 2011 Watson v. Jeopardy champion, and far less time between AlphaGo and Libratus. So what's the significance to you and your professional software engineering career? Elementary, my dear Watson: just like the so-called digital divide between non-technical and technical lay people, there is already the start of a technological divide between top systems engineers and the rest of the playing field in terms of making an impact and disrupting the way the world works. Don't believe me? When's the last time you programmed a self-driving car or a neural network that can guess your drawings?

Making an impact and how to learn machine learning

The toughest part about getting started with machine learning is figuring out what type of problem you have at hand, because you run the risk of jumping to potential solutions too quickly, before understanding the problem. Sure, you can say this of any software design task, but this point can't be stressed enough when thinking about how to get machines to recognize patterns in data. There are specific applications of machine learning algorithms that solve a very specific problem in a very specific way, and it's difficult to know how to solve a meta-problem if you haven't studied the field from a conceptual standpoint. For me, a breakthrough in learning machine learning came from taking Andrew Ng's machine learning course on Coursera. So taking online courses can be a good way to start learning. If you don't have the time, you can learn about machine learning through numbers and images. Let's take a look.

Numbers

Conceptually speaking, predicting a pattern in a single variable based on a direct (that is, linear) relationship with another piece of data is probably the easiest machine learning problem and solution to understand and implement. The following script predicts the amount of data that will be created based on fitting a sample data set to a linear regression model: https://github.com/ctava/linearregression-go. Because the sample data fits a linear model reasonably well, the machine learning program predicted that the data created in the fictitious Bob's system will grow from 2017 to 2018 (Bob's data 2017: 4401; Bob's data 2018 prediction: 5707). This is great news for Bob and for you. You see, machine learning isn't so tough after all. I'd like to encourage you to save data for a single variable (also known as a feature) to a CSV file and see if you can find that the data has a linear relationship with time. The following website is handy for calculating the number of days between two dates: https://www.timeanddate.com/date/duration.html. Be sure to choose your starting day and year appropriately at the top of the file to fit your data.
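If the Go tooling isn't your thing, the same kind of linear fit can be sketched in a few lines of Python with scikit-learn. The yearly totals below are made-up placeholders rather than the sample data used in the repository above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical yearly totals of data created in "Bob's" system.
years = np.array([[2013], [2014], [2015], [2016], [2017]])
data_created = np.array([2100, 2650, 3300, 3900, 4401])

# Fit a straight line through the history and extrapolate one year ahead.
model = LinearRegression().fit(years, data_created)
prediction_2018 = model.predict(np.array([[2018]]))[0]
print(f"Predicted data created in 2018: {prediction_2018:.0f}")
```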
Images

Machine learning on images is exciting! It's fun to see what the computer comes up with in terms of pattern recognition, or image recognition. Here's an example using computer vision to detect that Grumpy Cat is actually a Persian cat: https://github.com/ctava/tensorflow-go-imagerecognition. If setting up TensorFlow from source isn't your thing, not to worry. Here's a Docker image to start off with: https://github.com/ctava/tensorflow-go. Once you've followed the instructions in the readme.md file, simply:

Get github.com/ctava/tensorflow-go-imagerecognition
Run main.go -dir=./ -image=./grumpycat.jpg
Result: BEST MATCH: (66% likely) Persian cat

Sure, there is a whole discussion to be had on this topic alone in terms of what TensorFlow is, what a tensor is, and what image recognition is. But I just wanted to spark your interest so that maybe you'll start to look at the amazing advances in the computer vision field. Hopefully this has motivated you to learn more about machine learning, based on reading about the recent advances in the field and seeing two simple examples of predicting numbers and classifying images. I'd like to encourage you to keep up with data science in general.

About the Author

Chris Tava is a Software Engineering / Product Leader with 20 years of experience delivering applications for B2C and B2B businesses. His specialties include: program strategy, product and project management, agile software engineering, resource management, recruiting, people development, business analysis, machine learning, ObjC / Swift, Golang, Python, Android, Java, and JavaScript.


CapsNet: Are Capsule networks the antidote for CNNs' kryptonite?

Savia Lobo
13 Dec 2017
5 min read
Convolutional Neural Networks (CNNs) are a group from the neural network family that has excelled in areas such as image recognition and classification. They are one of the popular neural network models present in nearly all image recognition tasks that provide state-of-the-art results. However, CNNs have drawbacks, which are discussed later in this article. To address the issues with CNNs, Geoffrey Hinton, popularly known as the Godfather of Deep Learning, recently published a research paper along with two other researchers, Sara Sabour and Nicholas Frosst. In this paper, they introduced CapsNet, or Capsule Network: a neural network based on a multi-layer capsule system. Let's explore the issue with CNNs and how CapsNet came about as an advancement to it.

What is the issue with CNNs?

Convolutional Neural Networks are known to seamlessly handle image classification tasks. They are experts at learning at a granular level: the lower layers detect edges and the shape of an object, and the higher layers detect the image as a whole. However, CNNs perform poorly when an image has a slightly different orientation (a rotation or a tilt), as they compare every image with the ones learned during training. For instance, if an image of a face is to be detected, a CNN checks for facial features such as the nose, two eyes, mouth, and eyebrows, irrespective of their placement. This means a CNN may identify an incorrect face in cases where the placement of an eye and the nose is not as conventionally expected, for example in the case of a profile view. So the orientation and the spatial relationships between the objects within an image are not considered by a CNN.

To make CNNs understand orientation and spatial relationships, they were trained profusely with images taken from all possible angles. Unfortunately, this resulted in an excessive amount of time required to train the model, and the performance of the CNNs did not improve much. Pooling methods were also introduced at each layer within the CNN model for two reasons: first, to reduce the time invested in training, and second, to bring out positional invariance within CNNs. This resulted in triggering false positives in an image, i.e., the network detected the object within an image but did not check its orientation, and it incorrectly declared it to be the right image. Thus, positional invariance made CNNs susceptible to minute changes in viewpoint. Instead of invariance, what CNNs require is equivariance: a property that makes CNNs adapt to a change in rotation or proportion within an image. This equivariance is now possible via the Capsule Network.

The Solution: Capsule Network

CapsNet, or Capsule Network, is an encapsulation of nested neural network layers. A traditional neural network contains multiple layers, whereas a capsule network contains multiple layers within a single capsule. CNNs go deeper in terms of height, whereas a capsule network deepens in terms of nesting, or internal structure. Such a model is highly robust to geometric distortions and transformations, which result from non-ideal camera angles. Thus, it is able to handle orientations, rotations, and so on exceptionally well.

[Figure: CapsNet architecture. Source: https://arxiv.org/pdf/1710.09829.pdf]

Key features

Layer-based squashing

In a typical Convolutional Neural Network, a squashing function is added to each layer of the CNN model. A squashing function compresses its input towards one end of a small interval, introducing nonlinearity into the neural network and enabling the network to be effective. In a Capsule Network, by contrast, the squashing function is applied to the vector output of each capsule. The squashing function proposed by Hinton in the paper is v_j = (||s_j||^2 / (1 + ||s_j||^2)) * (s_j / ||s_j||), where s_j is the total input to capsule j and v_j is its vector output (source: https://arxiv.org/pdf/1710.09829.pdf). Instead of applying nonlinearity to each neuron, the squashing function applies squashing to a group of neurons, i.e., the capsule; to be more precise, it applies the nonlinearity to the vector output of each capsule. The squashing function shrinks the vector output towards zero if it is a small vector, and if the vector is long, it limits the output vector's length to just under 1.
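For readers who prefer code to notation, here is a minimal NumPy sketch of that squashing non-linearity. It illustrates the formula above and is not the paper's reference implementation.

```python
import numpy as np

def squash(s, eps=1e-9):
    """v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||), applied along the last axis."""
    squared_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    scale = squared_norm / (1.0 + squared_norm)
    return scale * s / np.sqrt(squared_norm + eps)

print(squash(np.array([0.1, 0.0])))   # short vector: pushed towards zero
print(squash(np.array([10.0, 0.0])))  # long vector: length squashed to just under 1
```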
Dynamic routing

The dynamic routing algorithm in CapsNet replaces the scalar-output feature detectors of the CNN with vector-output capsules. Also, the max pooling in CNNs, which led to positional invariance, is replaced with 'routing by agreement'. The algorithm ensures that, when the data is forward propagated, it goes to the most relevant capsule in the layer above. Although dynamic routing adds an extra computational cost to the capsule network, it has proved advantageous by making the network more scalable and adaptable.

Training the Capsule Network

The capsule network is trained using MNIST, a dataset of more than 60,000 handwritten digit images that is commonly used to test machine learning algorithms. The capsule model is trained for 50 epochs with a batch size of 128, where each epoch is a complete run through the training dataset. A TensorFlow implementation of CapsNet based on Hinton's research paper is available in a GitHub repository. Similarly, CapsNet can also be implemented using other deep learning frameworks such as Keras, PyTorch, and MXNet.

CapsNet is a recent breakthrough in the field of deep learning and promises to benefit organizations with accurate image recognition tasks. Implementations of CapsNet are slowly catching up and are expected to reach par with CNNs. So far, capsule networks have been trained on a very simplistic dataset, MNIST, and they will still need to prove themselves on various other datasets. However, as time advances and we see CapsNet being trained in different domains, it will be exciting to see how it shapes up as a faster and more efficient training technique for deep learning models.


Customer Relationship Management just got better with Artificial Intelligence

Savia Lobo
28 Jan 2018
8 min read
According to an International Data Corporation (IDC) report, Artificial Intelligence (AI) has the potential to impact many areas of customer relationship management (CRM). AI will take mundane tasks off CRM teams' plates, which means they will be able to address more customer queries through an automated approach. An AI-based expert CRM offers highly predictive and intuitive solutions to customer problems, thus grabbing maximum customer attention.

With AI, CRM platforms within different departments such as sales, finance, and marketing do not limit themselves to getting service feedback from their customers; they can also gain information from the data that customers generate online, i.e., on social media or through IoT devices. With such a massive amount of data coming from various channels, it becomes tricky for organizations to keep track of their customers. Extracting detailed insights from this huge amount of data becomes all the more difficult, and this is the gap where organizations feel the need to bring an AI-based, optimized approach to their CRM platform. The AI-enabled platform can assist CRM teams in gaining insights from the large aggregation of customer data, while also paving the way for seamless customer interactions. Organizations can not only provide customers with helpful suggestions but also recommend products to boost their business profitability. AI-infused CRM platforms can take over straightforward tasks, such as gathering client feedback, that are otherwise time consuming. This allows businesses to focus on customers that provide higher business value, which might previously have been neglected. It also acts as a guide for employees via a virtual assistant, allowing them to tackle customer queries without assistance from senior executives.

AI techniques such as natural language processing (NLP) and predictive analytics are used within the CRM domain to gain intelligent insights and enhance human decision making. NLP interprets incoming emails, categorizes them on the basis of intent, and automatically drafts responses by identifying the priority level. Predictive analytics helps in detecting the optimal time for solving customer queries, and the mode of communication that will best fit engaging with the customer. With such functionalities, a smarter move towards digitizing organizational solutions can be achieved, reaping huge profits for organizations that wish to leverage it.

How AI is transforming CRM

Businesses aim to satisfy the customers who use their services, because keeping a customer happy can lead to further increases in revenue. Organizations can achieve this rapidly with the help of AI. Salesforce, the market leader in the CRM space, has integrated an AI assistant popularly known as Einstein. Einstein makes CRM an easy-to-use platform by simply allowing customers to import their data into Salesforce; it automatically provides ready-to-crunch, data-driven insights across different channels. Other organizations such as SAP and Oracle are implementing AI-based technologies in their CRM platforms to provide an improved customer experience. Let's explore how AI benefits an organization.

Steering sales

With AI, the sales team can shift their focus from mundane administrative tasks and get to know their customers better. The sales CRM team leverages novel scoring techniques, which help in prioritizing quality leads, thus generating maximum revenue for the organization.
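To make the idea of predictive lead scoring a little more concrete, here is a minimal sketch using scikit-learn. The features and training data are entirely made up; a production CRM would learn from its own historical records of which leads actually converted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per lead: [emails_opened, pages_visited, demo_requested]
X_history = np.array([
    [0, 1, 0],
    [2, 3, 0],
    [5, 8, 1],
    [1, 2, 0],
    [6, 10, 1],
    [4, 7, 1],
])
converted = np.array([0, 0, 1, 0, 1, 1])  # did the lead become a customer?

# Fit a simple model on past leads, then score a new one.
model = LogisticRegression().fit(X_history, converted)
new_lead = np.array([[3, 6, 1]])
print("Probability of conversion:", model.predict_proba(new_lead)[0, 1])
```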
Sales leaders, with the help of AI, can work towards improving sales productivity. After analyzing the company's historical data and employee activities, the AI-fused CRM software can present a performance report of the top sales representatives. Such a feature helps sales leaders strategize what the bottom-line representatives should learn from the top representatives to drive conversations with their customers that show a likelihood of generating sales. People.ai, a sales management platform, utilizes AI to deliver performance analytics and personalized coaching, and to provide reviews of the sales pipeline. This can help sales leaders get a complete view of the sales activities going on within their organizations.

Marketing it better

Triggering a customer sale requires extensive push marketing strategies. With AI-enabled marketing, customers are led onto a predictive journey, which aims to end each journey in a sale or a subscription; either way, it is a win-win situation for the organization. Predictive scoring can intelligently determine the likelihood of a customer subscribing to a newsletter or making a purchase. AI can also analyze images across various social media sources such as Pinterest and Facebook, and can provide suggestions for the visuals of an upcoming advertising campaign. Also, by carrying out sentiment analysis on product reviews and customer feedback, the marketing team can take into account users' sentiment about a particular brand or product. This helps brands announce discount offers when sales decrease, or increase the production of a product in demand. Marketo, a marketing automation platform, includes software which helps different CRM platforms gain rich behavioral insights about their customers and drive business strategies.

24/7 customer support

Whenever a customer query arises within a CRM, AI anticipates the likely issues and resolves them before they turn into a problem. Different customer cases are classified and directed to the right service agent with the help of predictive analytics techniques. Also, NLP-based digital assistants known as chatbots are used to analyze the written content within emails. A chatbot efficiently responds to customer emails; only in rare cases does it direct the email to a human service agent. Chatbots can even notify a customer about an early-bird offer to purchase a product which they are likely to buy. They can also set up meetings and send scheduled reminders, fitting the era of push notifications and smart wearables. Hence, with AI in CRM, organizations can not only offer customers better services but also provide 24/7 support. Agent.ai, an AI-based customer service platform, allows organizations to provide 24/7/365 customer support, including holidays, weekends, and non-staffed hours.

Application development is no longer only a developer's play

Building an application has become an important milestone for any organization. If the application has a seamless and user-friendly interface, it is favored by many customers and the organization gets more customer traction. Building an application was once considered 'a developer's job only' as it involves coding. However, due to the rise of platforms that help build an application with less coding, or in fact no coding, any non-coder can develop an application. CRM platforms help businesses build applications which provide insight-driven predictions and recommendations to their customers.

Salesforce assures its customers that each application built on its platform includes intelligent data modeling, tracking, and monitoring. Business users, data scientists, or any non-developer can now build applications without learning to code. This helps them create prediction-based applications their way, without the IT hassle.

Challenges and limitations

AI implementations are becoming common, with an increasing number of organizations adopting them on both a small and a large scale. Many businesses are moving towards smart customer management by infusing AI into their organizations. AI undoubtedly brings ease of work, but there are challenges that a CRM platform can face which, if unaddressed, may cause revenue decline for businesses. Below are the challenges organizations might face while setting up AI in their CRM platform:

Outdated data: Organizations collect a huge amount of data during various business activities to derive meaningful insights about sales, customer preferences, and so on. This data is a treasure trove for the marketing team when planning strategies to attract new customers and retain existing ones. On the contrary, if the data provided is not up to date, CRM teams may find it difficult to understand the current state of customer relationships. To avoid this, a comprehensive data cleanup project is essential for maintaining better data quality.

Partially automated: AI creates an optimized environment for the CRM with the use of predictive analytics and natural language processing for better customer engagement. This eases the mundane elements for the CRM team, who can then focus on other strategic outcomes. This does not imply that AI is completely replacing humans. Instead, a human touch is required to monitor whether the solutions given by the AI benefit the customer, and how they can be tweaked towards a much smarter AI.

Intricacies of language: An AI is trained on data which includes various sets of phrases and questions, along with the desired output it should give. If the query input by the customer is not phrased correctly, the AI is unable to provide a correct solution. Hence, customers have to take care to phrase their queries correctly, or the machine will not understand what the customer is asking.

Infusing AI into CRM has multiple benefits, but the three most important ones are predictive scoring, forecasting, and recommendations. These benefits empower CRM to outsmart its traditional counterpart by helping organizations serve their customers with state-of-the-art results. Customers appreciate it when their query is addressed in less time, which leaves a positive impression of the organization. Additionally, we have digital assistants to help firms solve customer queries quickly.


Can Cryptocurrency establish a new economic world order?

Amarabha Banerjee
22 Jul 2018
5 min read
Cryptocurrency has already established one thing: there is a viable alternative to dollars and gold as a measure of wealth. Our present economic system is flawed. Cryptocurrencies, if utilized properly, can change the way the world deals with money and wealth. But can they completely overthrow the present system and create a new economic world order? To know the answer to this, we will have to understand the concept of cryptocurrencies and the premise for their creation.

Money: the weapon to control the world

Money is a measure of wealth, which translates into power. The power centers have largely remained the same throughout history, be it a monarchy, an autocracy, or a democracy. Power has shifted from one king to one dictator, to a few elected or selected individuals. To remain in power, they had to control the source and distribution of money. That's why, to date, only the government can print money and distribute it among citizens. We can earn money in exchange for our time and skills, or loan money in exchange for our future time. But there's only so much time that we can give away, and hence the present-day economy always runs on the philosophy of scarcity and demand. Money distribution follows a trickle-down approach in a pyramid structure.

[Figure: wealth distribution pyramid. Source: Credit Suisse]

Inception of cryptocurrency: the delocalization of money

It's abundantly clear from the image above that, while the printing of money is under the control of the powerful and the wealth creators, the pyramidal distribution mechanism has also ensured that very little money flows to the bottommost segments of the population. The money creators have been ensuring their own safety and prosperity throughout history by accumulating chunks of money for themselves. Consequently, the global wealth gap has increased staggeringly. This could well have triggered the rise of cryptocurrencies as an alternative economic system, one that, theoretically, doesn't just accumulate wealth at the top but also rewards anyone who is interested in mining these currencies and spending their time and resources. The main concept that made this possible was the distributed computing mechanism, which has gained tremendous interest in recent times.

Distributed computing, blockchain, and the possibilities

The foundation of our present economic system is a central power, be it a government, a ruler, or a dictator. The alternative to this central system is a distributed system, where every single node of communication holds decision-making power and is equally important to the system. So if one node is cut off, the system does not fall apart; it keeps on functioning. That's what makes distributed computing terrifying for centralized economic systems: they can't simply attack the creator of the system or use a violent hack to bring the entire system down.

[Figure. Source: Medium.com]

When the white paper on cryptocurrencies was first published by the anonymous Satoshi Nakamoto, there was hope of constituting a parallel economy, where any individual with access to a mobile phone and the internet might be able to mine bitcoins and create wealth, not just for himself or herself, but for the system as well. Satoshi also invented the concept of the blockchain: an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. Blockchain was the technology on top of which the first cryptocurrency, Bitcoin, was created.
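To make the ledger idea concrete, here is a toy Python sketch of hash chaining, the basic mechanism that makes a blockchain verifiable and tamper-evident. It is only an illustration; real blockchains add mining, proof-of-work, and consensus across distributed nodes.

```python
import hashlib
import json

def make_block(transactions, previous_hash):
    # Each block records the hash of the previous block, so altering any
    # past record invalidates every block that follows it.
    block = {"transactions": transactions, "previous_hash": previous_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], previous_hash="0" * 64)
second = make_block(["bob pays carol 2"], previous_hash=genesis["hash"])
print(second["previous_hash"] == genesis["hash"])  # True: the chain links up
```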
The concept of Bitcoin mining seemed revolutionary at the time. The more people that joined the system, the more enriched the system would become. The hope was that it would make the mainstream economic system take note and cause a major overhaul of the wealth distribution system. But sadly, none of that seems to have taken place yet.

The phase of disillusionment

The reality is that Bitcoin mining capability was controlled by system resources. The creators had also accumulated enough bitcoins for themselves, much like in the traditional wealth creation system. Satoshi's Bitcoin holdings were valued at $19.4 billion during the December 2017 peak, making him the 44th richest person in the world at that time. This basically meant that the wealth distribution system was at fault again: very few could get their hands on bitcoins as their prices in traditional currencies climbed. Governments then duly played their part in declaring that trading in bitcoins was illegal, cracking down on several cryptocurrency top guns. Recently, different countries have joined the bandwagon to ban cryptocurrency, and hence the value is much lower now. The major concern is that the skepticism in the public's mind might kill the hype earlier than anticipated.

[Figure. Source: Bitcoin.com]

The future and the hope for a better alternative

What we must keep in mind is that Bitcoin is just one derivative of the concept of cryptocurrencies. The primary concept of distributed systems, and the resulting technology, blockchain, is still a very viable and novel one. The problem in the current Bitcoin system is the distribution mechanism. Whether we will be able to tap into the distributed system concept and create a better version of the Bitcoin model, only time will tell. But for the sake of better wealth propagation and wealth balance, we can only hope that this realignment of the economic system happens sooner rather than later.

Blockchain can solve tech's trust issues - Imran Bashir
A brief history of Blockchain
Crypto-ML, a machine learning powered cryptocurrency platform

A chatbot toolkit for developers: design, develop, and manage conversational UI

Bhagyashree R
10 Sep 2018
7 min read
Although chatbots have been under development for at least a few decades, they did not become mainstream channels for customer engagement until recently. Due to serious efforts by industry giants like Apple, Google, Microsoft, Facebook, IBM, and Amazon, and their subsequent investments in developing toolkits, chatbots and conversational interfaces have become a serious contender to other customer contact channels. In this time, chatbots have been applied in various sectors and in various conversational scenarios within sectors like retail, banking and finance, government, health, legal, and many more.

This tutorial is an excerpt from a book written by Srini Janarthanam titled Hands-On Chatbots and Conversational UI Development. The book is organized as eight chatbot projects that introduce the ecosystem of tools, techniques, concepts, and even gadgets relating to conversational interfaces. Over the last few years, an ecosystem of tools and services has grown around the idea of conversational interfaces. There are a number of tools that we can plug and play to design, develop, and manage chatbots.

Mockup tools

Mockups can be used to show clients how a chatbot would look and behave. These are tools that you may want to consider using during conversation design, after coming up with sample conversations between the user and the bot. Mockup tools allow you to visualize the conversation between the user and the bot and showcase the dynamics of conversational turn-taking. Some of these tools allow you to export the mockup design and make videos. BotSociety.io and BotMock.com are some of the popular mockup tools.

Channels in Chatbots

Channels refer to places where users can interact with the chatbot. There are several deployment channels over which your bots can be exposed to users. These include:

Messaging services such as Facebook Messenger, Skype, Kik, Telegram, WeChat, and Line
Office and team chat services such as Slack, Microsoft Teams, and many more
Traditional channels such as web chat, SMS, and voice calls
Smart speakers such as Amazon Echo and Google Home

Choose the channel based on your users and the requirements of the project. For instance, if you are building a chatbot targeting consumers, Facebook Messenger can be the best channel because of the growing number of users who already use the service to keep in touch with friends and family. Adding your chatbot to their contact list may be easier than getting them to download your app. If the user needs to interact with the bot using voice in a home or office environment, smart speaker channels can be an ideal choice. And finally, there are tools that can connect chatbots to many channels simultaneously (for example, Dialogflow integration, MS Bot Service, Smooch.io, and so on).

Chatbot development tools

There are many tools that you can use to build chatbots without having to write even a single line of code: Chatfuel, ManyChat, Dialogflow, and so on. Chatfuel allows designers to create the conversational flow using visual elements. With ManyChat, you can build the flow using a visual map called the FlowBuilder. Conversational elements such as bot utterances and user response buttons can be configured using drag-and-drop UI elements. Dialogflow can be used to build chatbots that require advanced natural language understanding to interact with users.

On the other hand, there are scripting languages such as Artificial Intelligence Markup Language (AIML), ChatScript, and RiveScript that can be used to build chatbots. These scripts contain the conversational content and flow, which then needs to be fed into an interpreter program or a rules engine to bring the chatbot to life. The interpreter decides how to progress the conversation by matching user utterances to templates in the scripts. While it is straightforward to build conversational chatbots using this approach, it becomes difficult to build transactional chatbots without generating explicit semantic representations of user utterances. PandoraBots is a popular web-based platform for building AIML chatbots.
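The snippet below is a tiny Python sketch of that template-matching idea; the patterns and replies are invented for illustration, and real interpreters for AIML or RiveScript add wildcards, variables, topics, and far richer matching.

```python
import re

# Each entry pairs a pattern with the template the bot should reply with.
SCRIPT = [
    (r"\b(hi|hello|hey)\b", "Hello! How can I help you today?"),
    (r"\bopening hours\b",  "We are open from 9am to 6pm, Monday to Friday."),
    (r"\b(bye|goodbye)\b",  "Goodbye! Happy to chat again any time."),
]

def reply(utterance):
    """Return the first matching template, or a fallback response."""
    for pattern, template in SCRIPT:
        if re.search(pattern, utterance.lower()):
            return template
    return "Sorry, I did not understand that."

print(reply("Hey there"))
print(reply("What are your opening hours?"))
```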
Alternatively, there are SDK libraries that one can use to build chatbots: MS Bot Builder, BotKit, and BotFuel, among others, provide SDKs in one or more programming languages to assist developers in building the core conversation management module. The ability to code the conversation manager gives developers the flexibility to mold the conversation and integrate the bot with backend tasks better than no-code and scripting platforms do. Once built, the conversation manager can then be plugged into other services, such as natural language understanding, to understand user utterances.

Analytics in Chatbots

Like other digital solutions, chatbots can benefit from collecting and analyzing their usage statistics. While you can build a bespoke analytics platform from scratch, you can also use off-the-shelf toolkits that are widely available now. Many off-the-shelf analytics toolkits can be plugged into a chatbot so that incoming and outgoing messages can be logged and examined. These tools tell chatbot builders and managers the kind of conversations that actually transpire between users and the chatbot. The data will give useful information such as which conversational tasks are popular, where the conversational experience breaks down, which utterances the bot did not understand, and which requests the chatbot still needs to scale up to. Dashbot.io, BotAnalytics, and Google's Chatbase are a few analytics toolkits that you can use to analyze your chatbot's performance.

Natural language understanding

Chatbots can be built without having to understand utterances from the user. However, adding natural language understanding capability is not very difficult, and it is one of the hallmark features that sets chatbots apart from their digital counterparts such as websites and apps with visual elements. There are many natural language understanding modules available as cloud services. Major IT players like Google, Microsoft, Facebook, and IBM have created tools that you can plug into your chatbot. Google's Dialogflow, Microsoft LUIS, IBM Watson, SoundHound, and Facebook's Wit.ai are some of the NLU tools that you can try.

Directory services

One of the challenges of building a bot is getting users to discover and use it. Chatbots are not as popular as websites and mobile apps, so a potential user may not know where to look to find the bot. Once your chatbot is deployed, you need to help users find it. There are directories that list bots in various categories. Chatbots.org is one of the oldest directory services and has been listing chatbots and virtual assistants since 2008. Other popular ones are Botlist.co, BotPages, BotFinder, and ChatBottle. These directories categorize bots in terms of purpose, sector, languages supported, countries, and so on. In addition to these, channels such as Facebook and Telegram have their own directories for the bots hosted on their channel. In the case of Facebook, you can help users find your Messenger bot using its Discover service.

Monetization

Chatbots are built for many purposes: to create awareness, to support customers after sales, to provide paid services, and many more. In addition to all these, chatbots with interesting content can engage users for a long time and can be used to make some money through targeted, personalized advertising. Services such as CashBot.ai and AddyBot.com can integrate with your chatbot to send targeted advertisements and recommendations to users, and when users engage, your chatbot makes money.

In this article, we saw tools that can help you build a chatbot, collect and analyze its usage statistics, add features like natural language understanding, and much more. This is not an exhaustive list of tools, nor are the services listed under each type exhaustive. These tools are evolving over time as chatbots find their niche in the market. This list gives you an idea of how multidimensional the conversational UI ecosystem is and should help you explore the space and feed your creative mind.

If you found this post useful, do check out the book, Hands-On Chatbots and Conversational UI Development, which will help you explore the world of conversational user interfaces.

How to build a chatbot with Microsoft Bot framework
Facebook's Wit.ai: Why we need yet another chatbot development framework?
How to build a basic server side chatbot using Go


The most asked questions on Big Data, Privacy and Democracy in last month’s international hearing by Canada Standing Committee

Savia Lobo
16 Jun 2019
16 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29.  Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing took place on May 28, and includes the following witnesses: - Jim Balsillie, Chair, Centre for International Governance Innovation; Retired Chairman and co-CEO of BlackBerry - Roger McNamee, Author of Zucked: Waking up to the Facebook Catastrophe - Shoshana Zuboff, Author of The Age of Surveillance Capitalism - Maria Ressa, CEO and Executive Editor, Rappler Witnesses were asked various questions based on data privacy, data regulation, the future of digital tech considering current data privacy model, and much more. Why we cannot enforce independent regulators to oversee user rights data privacy Damion Collins to McNamee:  “In your book you said as far as I can tell Zack has always believed that users value privacy more than they should. On that basis, do you think we will have to establish in law the standards we want to see enforced in terms of users rights data privacy with independent regulators to oversee them? because the companies will never do that effectively themselves because they just don't share the concerns we have about how the systems are being abused” Roger McNamee: “I believe that it's not only correct in terms of their philosophy, as Professor Zuboff points out, but it is also baked into their business model--this notion--that any data that exists in the world, claimed or otherwise, they will claim for their own economic use and framing. How you do that privacy, I think is extremely difficult and in my opinion, would be best done by simply banning the behaviors that are used to gather the data.” Zuckerberg is more afraid of privacy regulation Jo Stevens, Member of Parliament for Cardiff Central, asked McNamee,  “What you think Mark Zuckerberg is more frightened about privacy regulation or antitrust action?” McNamee replied saying that Zuckerberg is more afraid of privacy.  He further adds, “to Lucas I would just say the hardest part of this is setting the standard of what the harm is these guys have hidden behind the fact that's very hard to quantify many of these things.” In the future can our homes be without digital tech? Michel Picard, Member of the Canadian House of Commons asked Zuboff, “your question at the beginning is, can the digital future be our home? My reaction to that was, in fact, the question should be in the future home be without digital.” Zubov replied, “that's such an important distinction because I don't think there's a single one of us in this room that is against the digital per se. It's this is not about being anti-technology, it's about technology being hijacked by a rogue economic logic that has turned it to its own purposes. We talked about the idea that conflating the digital with surveillance capitalism is a dangerous category error. 
What we need is to be able to free the potential of the digital to get back to those values of Democritus democratization of knowledge and individual emancipation and empowerment that it was meant to serve and that it still can serve.” Picard further asks, “compared to the Industrial Revolution where somewhere although we were scared of the new technology, this technology was addressed to people for them to be beneficiaries of that progress, now, it's we're not beneficiary at all. The second step of this revolution, it is a situation where people become a producer of the raw material and as you mentioned as you write “Google's invention reveals new capabilities to infer and deduce the thoughts feelings intention interests of individual and groups with an automated architecture that operates as a one-way mirror irrespective of a person's awareness. So like people connected to the machine and matrix.” Zuboff replies, “From the very beginning the data scientists at Google, who are inventing surveillance capitalism, celebrated in their written patterns and in their research, published research, the fact that they could hunt and capture behavioral surplus without users ever being aware of these backstage operations. Surveillance was baked into the DNA of this economic logic essential to its strange form of value creation. So it's with that kind of sobriety and gravitas that it is called surveillance capitalism because without the surveillance piece it cannot exist.” Can Big data be simply pulled out of jurisdictions in the absence of harmonized regulation across democracies? Peter Kent, Member of Parliament Thornhill, asked Balsillie, “with regards to what we've seen that Google has said in response to the new federal elections, the education on advertising will simply withdraw from accepting advertising. Is it possible that big data could simply pull out of jurisdictions where regulations, in the absence of harmonized regulation, across the democracies are present?” To this, Balsillie replies, “ well that's the best news possible because as everyone's attested here. The purpose of surveillance capitalism is to undermine personal autonomy and yet elections democracy are centered on the sovereign self exercised their sovereign will. Now, why in the world would you want to undermine the core bedrock of election in a non-transparent fashion to the highest bidder at the very time your whole citizenry is on the line and in fact, the revenue for is immaterial to these companies. So one of my recommendations is, just banning personalized online ads during elections. We have a lot of things you're not allowed to do for six or eight weeks just put that into the package it's simple and straightforward.” McNamee further adds his point on the question by saying, “point that I think is being overlooked here which is really important is, if these companies disappeared tomorrow, the services they offer would not disappear from the marketplace. In a matter of weeks, you could replicate Facebook, which would be the harder one. There are substitutes for everything that Google does that are done without surveillance capitalism. Do not in your mind allow any kind of connection between the services you like and the business model of surveillance capitalism. 
There is no inherent link, none at all. This is something that has been created by these people because it's wildly more profitable."

Committee lends a helping hand as an 'act of solidarity' with press freedom

Charlie Angus, a member of the Canadian House of Commons, said, "Facebook and YouTube transformed the power of indigenous communities to speak to each other, to start to change the dynamic of how white society spoke about them. So I understand its incredible power for good. I see more and more of it in my region, which has self-radicalized people like the flat earthers, anti-vaxxers and 9/11 truthers, and I've seen its effect in our elections through the manipulation of anti-immigrant, anti-Muslim materials. People are dying in Asia from the implications of these platforms. I want to ask you: in an act of solidarity with our Parliament, with our legislators, are there statements that should be made public through our Parliament to give you support, so that we can maintain a link with you as an important ally on the front line?"

Ressa replied, "Canada has been at the forefront of holding fast to the values of human rights and press freedom. I think the more we speak about this, the more those values are reiterated, especially since someone like President Trump truly likes President Duterte, and vice versa; it's very personal. But sir, when you talked about where people are dying, you've seen this all over Asia: there's Myanmar, there is the drug war here in the Philippines, India and Pakistan, just instances where this tool for empowerment, just like in your district, is something that we do not want to go away, not shut down. Despite the great threats that we face, that I face and my company faces, Facebook and the social media platforms still give us the ability to organize, to create communities of action that had not been there before."

Do fear, outrage, hate speech and conspiracy theories sell more than truth?

Edwin Tong, a member of the Singapore Parliament, asked McNamee about the point McNamee made during his presentation that "the business model of these platforms really is focused on algorithms that drive content to people who think they want to see this content. And you also mentioned that fear, outrage, hate speech and conspiracy theories are what sells more, and I assume what you mean to say by that is that it sells more than truth; would that be right?"

McNamee replied, "There was a study done at MIT in Cambridge, Massachusetts that suggested disinformation spreads 70% further and six times faster than fact, and there are actually good human explanations for why hate speech and conspiracy theories move so rapidly: it's about triggering the fight-or-flight reflex."

Tong further highlighted what Ressa had said about how this information is spread through the use of bots: "I think she said 26 fake accounts translate into 3 million different accounts which spread the information. I think we are facing a situation where disinformation, if not properly checked, gets exponentially viral. People get to see it all the time, and over time, unchecked, this leads to a serious erosion of trust and a serious undermining of institutions; we can't trust elections, and fundamentally democracy becomes marginalized and eventually demolished."

To this, McNamee said, "I agree with that statement completely. To me the challenge is in how you manage it. If you think about it, censorship and moderation were never designed to handle things at the scale that these Internet platforms operate at.
So in my view, the better strategy is to do the interdiction upstream: to ask the fundamental question of what the role of platforms like this in society is, and then, secondly, what business model is associated with them. What you really want to do, and my partner Renée DiResta, who is a researcher in this area, talks about this, is address the issue of freedom of speech versus freedom of reach, the latter being the amplification mechanism. What's really going on on these platforms is that the algorithms find what people engage with and amplify that more, and sadly hate speech, disinformation and conspiracy theories are, as I said, the catnip: that's what really gets the algorithms humming and gets people to react. So in that context, eliminating that amplification is essential, and the question is how you're going to go about doing that and how you are going to verify that it's been done. In my mind, the simplest way to do that is to prevent the data from getting in there in the first place."

Tong further said, "I think you must go upstream to deal with it fundamentally in terms of infrastructure, and I think some witnesses also mentioned that we need to look at education, which I totally agree with. But when it does happen, when you have that proliferation of false information, there must be a downstream or end-result kind of reach, and that's where I think your example of Sri Lanka is very pertinent, because it demonstrates that leaving the platforms unchecked to do nothing about the false information is wrong. What we do need is to have regulators and governments be clothed with powers and levers to intervene, to intervene swiftly, and to disrupt the viral spread of online falsehoods very quickly. Would you agree, as a generalization?"

McNamee said, "I would not be in favor of the level of government intervention I have recommended here, but I simply don't see alternatives at the moment. In order to do what Shoshana has talked about and what Jim is talking about, you have to have some leverage, and the only leverage governments have today is their ability to shut these things down; nothing else works quickly enough."

Sun Xueling, another member of the Parliament of Singapore, asked McNamee, "I would like to make reference to the Christchurch shooting on the 15th of March 2019, after which the New York Times published an article by Kevin Roose." She quoted what Roose wrote in his article: "We do know that the design of Internet platforms can create and reinforce extremist beliefs.
Their recommendation algorithms often steer users towards edgier content, a loop that results in more time spent on the app, and more advertising revenue for the company."

McNamee said, "Not only do I agree with that, I would like to make a really important point, which is that the design of the Internet itself is part of the problem. I'm of the generation, as Jim is as well, that was around when the internet was originally conceived and designed, and the notion in those days was that people could be trusted with anonymity. That was a mistake, because bad actors use anonymity to do bad things, and the Internet has essentially enabled disaffected people to find each other in ways they never could before and to organize in ways they could not in the real world. So when we're looking at Christchurch, we have to recognize that this was a symphonic work. This man went in and organized at least a thousand co-conspirators prior to the act, using the anonymous functions of the internet to gather them and prepare for this act. It was then, and only then, after all that groundwork had been laid, that the amplification processes of the system went to work. But keep in mind, those same people kept reposting the film; it is still up there today."

How can one eliminate the tax deductibility of specific categories of online ads?

Jens Zimmermann, from Germany, asked Jim Balsillie to explain a bit more deeply "the question of taxation", which he had mentioned in one of his six recommendations. To this Balsillie said, "I'm talking about those that are buying the ads. The core problem here is when you're ad-driven; you've heard extremely expert testimony that they'll do whatever it takes to get more eyeballs, and the subscription-based model is a much safer place to be because it's not attention-driven. One of the purposes of tax is to manage externalities. If you don't like the externalities that we're grappling with, that have been illuminated here, then disadvantage those, and many of these platforms are moving more towards subscription-based models anyway. So just use tax as a vehicle to do that; the benefit is it gives you revenue, and the second thing it could do is begin to shift things towards more domestic services. I think tax has not been a lever that's been used, and it's right there for you."

Thinking beyond behavioral manipulation and data-surveillance-driven business models

Keit Pentus, the representative from Estonia, asked McNamee, "If you were sitting in my chair today, what would be the three steps you would recommend, or would take, if we leave shutting down the platforms aside for a second?"

McNamee said, "In the United States, or in North America, roughly 70% of all the artificial intelligence professionals are working at Google, Facebook, Microsoft, or Amazon, and to a first approximation they're all working on behavioral manipulation. There are at least a million great applications of artificial intelligence, and behavioral manipulation is not one of them. I would argue that it's like creating time-release anthrax or cloning human babies. It's just a completely inappropriate and morally repugnant idea, and yet that is what these people are doing.
I would simply observe that it is the threat of shutting them down, and the willingness to do it for brief periods of time, that creates the leverage to do what I really want to do, which is to eliminate the business model of behavioral manipulation and data surveillance."

"I don't think this is about putting the toothpaste back into the tube; this is about formulating toothpaste that doesn't poison people. I believe this is directly analogous to what happened with the chemical industry in the 50s. The chemical industry used to pour its waste products, mercury, chromium, and things like that, directly into fresh water, and it left mine tailings on the sides of hills. Petrol stations would pour spent oil into sewers, and there were no consequences. So the chemical industry grew like crazy and had incredibly high margins. It was the internet platform industry of its era. And then one day society woke up and realized that those companies should be responsible for the externalities that they were creating. So, this is not about stopping progress; this is my world, this is what I do."

"I just think we should stop hurting people. We should stop killing people in Myanmar, we should stop killing people in the Philippines, and we should stop destroying democracy everywhere else. We can do way better than that, and it's all about the business model. I don't want to pretend I have all the solutions; what I know is that the people in this room are part of the solution, and our job is to help you get there. So don't view anything I say as a fixed point of view."

"This is something that we're going to work on together, and you know the three of us are happy to take bullets for all of you, because we recognize it's not easy to be a public servant with these issues out there. But do not forget: you're not going to be asking your constituents to give up the stuff they love. The stuff they love existed before this business model, and it'll exist again after this business model."

To know more and to listen to the questions asked by other representatives, you can watch the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.

Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
UK lawmakers to social media: "You're accessories to radicalization, accessories to crimes", hearing on spread of extremist content
Key Takeaways from Sundar Pichai's Congress hearing over user data, political bias, and Project Dragonfly
What UX designers can teach Machine Learning Engineers? To start with: Model Interpretability

Sugandha Lahoti
18 Dec 2017
7 min read
Machine Learning is driving many major innovations happening around the world. But while complex algorithms drive some of the most exciting inventions, it's important to remember that these algorithms are always designed. This is why incorporating UX into machine learning engineering could offer a way to build even better machine learning systems that put users first.

Why we need UX design in machine learning

Machine learning systems can be complex. They require training data and depend on a variety of variables to allow the algorithm to make 'decisions'. This means transparency can be difficult, and when things go wrong, it isn't easy to fix. Consider the ways that machine learning systems can be biased against certain people: that's down to problems in the training set and, subsequently, how the algorithm is learning. If machine learning engineers took a more user-centric approach to building machine learning systems, borrowing some core principles from UX design, they could begin to solve these problems and minimize the risk of algorithmic bias. After all, every machine learning model has an end user. Whether it's for recommending products, financial forecasting, or driving a car, the model is always serving a purpose for someone.

How UX designers can support machine learning engineers

By and large, machine learning today is mostly focused on the availability of data and on improving model performance by increasing models' learning capabilities. In this process, a user-centric approach may be compromised. A tight interplay between UX design practices and machine learning is therefore essential to make ML discernible to all and to achieve model interpretability. UX designers can contribute to a series of tasks that improve algorithmic clarity.

Most designers create a wireframe, a rough guide for the layout of a website or an app. The principles behind wireframing can be useful for machine learning engineers as they prototype their algorithms: it provides a space to consider what's important from a user perspective.

User testing is also useful in the context of machine learning. Just as UX designers perform user testing for applications, going through a similar process for machine learning systems makes sense. This is most clear in the way companies test driverless cars, but anywhere that machine learning systems require human interaction should go through some period of user testing.

A UX design approach can also help in building ML algorithms for different contexts and different audiences. Take the case of an emergency room in a hospital. Often the data required for building a decision support system for emergency patient cases is quite sparse. Machine learning can help in mining relevant datasets and dividing them into subgroups of patients. UX design here can play the role of designing a particular part of the decision support system.

UX professionals bring human-centered design to ML components. This means they also consider the user's perspective while integrating those components. Machine learning models generally tend to take control entirely away from the user. For instance, in a driverless vehicle, the car determines the route, speed, and other decisions. Designers therefore include user controls so that users do not lose their voice in the automated system.

Machine learning developers may at times unintentionally introduce implicit biases into the systems they build, which can have serious negative side effects.
A recent example of this was Microsoft's Tay, a Twitter bot that started tweeting racist comments after spending just a few hours on Twitter. UX designers plan for these biases at a project-by-project level as well as at a larger level, advocating for a broad range of voices. They also keep an eye on the social impact of ML systems by keeping a check on the input (as was the case with Microsoft's Tay). This is done to ensure that an uncontrolled input does not lead to an unintended output.

What are the benefits of bringing UX design into machine learning?

All machine learning systems and practitioners can benefit from incorporating UX design practice as a standard. Some benefits of this collaboration are:
- Results generated from UX-enabled ML algorithms are transparent and easy to understand.
- It helps end users understand how the product functions and visualize the results better.
- A better understanding of algorithm results builds users' trust in the system. This is important if the consequences of incorrect results are detrimental to the user.
- It helps data scientists better analyse the results of an algorithm and subsequently make better predictions.
- It aids in understanding the different components of model building: from design, to development, to final deployment.

UX designers focus on building transparent ML systems by defining the problem through a storyboard rather than through the constraints placed by data and other aspects. They become aware of and catch biases, helping to ensure an unbiased machine learning system. All of this ultimately results in better product development and improved user experience.

How do companies leverage UX design with ML?

Top-notch companies are looking at combining the benefits of UX design with machine learning to build systems which balance the back-end work (performance and usability) with the front end (user-friendly outputs).

Take Facebook, for example. Their News Feed ranking algorithm, an amalgamation of ML and UX design, works towards two goals. The first is showing the right content at the right time, which involves machine learning capabilities. The other is enhancing user interaction by displaying posts more prominently so as to create more customer engagement and increase user dwell time.

Google's UX community has combined UX design with machine learning in an initiative known as human-centered machine learning (HCML). In this project, UX designers work in sync with ML developers to help them create unique machine learning products catering to human understanding. ML developers are in turn taught how to integrate UX into ML algorithms for a better user experience.

Airbnb created an algorithm to dynamically alter and set prices for their customers' units. However, on interacting with their customers, they found that users were hesitant to give full control to the system. So the UX design team altered the design to add settings for the minimum and maximum rent allowed, and created a setting that lets customers set the general frequency of rentals. In other words, they approached the machine learning project with the user experience in mind.
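That pattern, where the model proposes a value but the user's explicit settings bound it, is simple enough to sketch in code. The snippet below is purely illustrative: the class and function names are hypothetical and the pricing model is a stand-in, not Airbnb's actual system.

```python
# Illustrative only: a stand-in pricing model constrained by user-set limits.
from dataclasses import dataclass

@dataclass
class HostPricingControls:
    min_price: float               # chosen by the host, not by the model
    max_price: float               # chosen by the host, not by the model
    auto_pricing_enabled: bool = True

def suggest_price(demand_index: float, seasonality: float) -> float:
    """Stand-in for a learned dynamic-pricing model."""
    base_price = 80.0
    return base_price * demand_index * seasonality

def final_price(demand_index: float, seasonality: float,
                controls: HostPricingControls, current_price: float) -> float:
    if not controls.auto_pricing_enabled:
        return current_price       # the user opted out of automation entirely
    suggestion = suggest_price(demand_index, seasonality)
    # Clamp the model's suggestion to the host's chosen bounds.
    return min(max(suggestion, controls.min_price), controls.max_price)

controls = HostPricingControls(min_price=90.0, max_price=150.0)
print(final_price(demand_index=2.4, seasonality=1.1,
                  controls=controls, current_price=100.0))  # prints 150.0
```

The design choice worth noting is that the user's constraints are applied after the model runs, so the algorithm can be retrained or swapped out without ever overriding the limits the user has set.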
Salesforce has a Lightning Design System which includes a centralized design systems team of researchers, accessibility specialists, lead product designers, prototypers, and UX engineers. They work towards documenting visual systems and abstracting design patterns to assist ML developers.

Netflix has also plunged into this venture by offering its customers personalized recommendations as well as personalized visuals. Netflix uses personalized artwork or imagery to portray its titles: the artwork representing a title is adjusted to capture the attention of a particular user. This acts as a gateway into that title and gives users a visual sense of why a TV show or movie is good for them, helping Netflix achieve both user engagement and user retention.

The road ahead

In the future, we are likely to see most organizations having a blend of UX designers and data scientists in their teams to create user-friendly products. UX designers will work closely with developers to find unique ways of incorporating design ethics and abilities into machine learning findings and predictions. This should lead to new and better job opportunities for both designers and developers, with further expansion of their skill sets. In fact, it could give rise to a hybrid discipline, where algorithmic implementations are consolidated with design to make ML frameworks simpler for clients.
Why Twitter (finally!) migrated to Tensorflow

Amey Varangaonkar
18 Jul 2018
3 min read
A new nest in the same old tree. Twitter has finally migrated to TensorFlow as its preferred machine learning framework. While not many are surprised by this move given the popularity of TensorFlow, many have surely asked the question: what took them so long?

Why Twitter migrated to TensorFlow only now

Ever since its inception, Twitter has been using its trademark internal system called DeepBird. This system was able to utilize the power of machine learning and predictive analytics to understand user data, drive engagement and promote healthier conversations. DeepBird primarily used Lua Torch to power its operations. As support for Lua Torch grew sparse due to Torch's move to PyTorch, Twitter decided it was high time to migrate DeepBird to support Python as well, and started exploring its options.

Given the rising popularity of TensorFlow, it was probably the easiest choice Twitter has had to make in some time. Per the Stack Overflow Developer Survey 2018, TensorFlow is the framework most loved by developers, with almost 74% of respondents showing their loyalty towards it. With TensorFlow 2.0 around the corner, the framework promises to build on its existing capabilities by adding richer machine learning features with cross-platform support, something Twitter will be eager to get the most out of.

How does TensorFlow help Twitter?

After incorporating TensorFlow into DeepBird, Twitter was quick to share some of the initial results. Some of the features that stand out are:
- Higher engineer productivity: with the help of TensorBoard and some internal data visualization tools such as Model Repo, it has become a lot easier for Twitter engineers to observe the performance of their models and tweak them to obtain better results.
- Easier access to machine learning: TensorFlow simplified machine learning models which can be integrated with other technology stacks, thanks to the general-purpose nature of Python.
- Better performance: the overall performance of DeepBird v2 was found to be better than that of its predecessor, which was powered by Lua Torch.
- Production-ready models: Twitter plans to develop models that can be integrated into the workflow with minimal issues and bugs, as compared to other frameworks such as Lua Torch.

With TensorFlow in place, Twitter users can expect their timelines to be full of relatable, insightful and high-quality interactions which they can easily be a part of. Tweets will be shown to readers based on their relevance, and TensorFlow will be able to predict how a particular user will react to them.

A large number of heavyweights have already adopted TensorFlow as their machine learning framework of choice: eBay, Google, Uber, Dropbox, and Nvidia being some of the major ones. As the list keeps on growing, one can only wonder which major organization will be next.

Read more
TensorFlow 1.9.0-rc0 release announced
Python, Tensorflow, Excel and more – Data professionals reveal their top tools
Distributed TensorFlow: Working with multiple GPUs and servers
Glancing at the Fintech growth story - Powered by ML, AI & APIs

Kartikey Pandey
14 Dec 2017
4 min read
When MyBucks, a Luxembourg-based fintech firm, started scaling up its business in other countries, it faced the daunting challenge of reducing the timeline for processing credit requests from over a week to just a few minutes. Any financial institution dealing with lending can relate to the nature of the challenges associated with giving credit: checking credit history, tracking past fraudulent activities, and so on. This automatically makes the lending process tedious and time-consuming. To add to this, MyBucks also aimed to make its entire lending process extremely simple and attractive to customers.

MyBucks' promise to its customers: no more visiting branches and seeking approvals. Simply log in from your mobile phone and apply for a loan; we will handle the rest in a matter of minutes.

Machine learning has triggered a whole new segment in the fintech industry: automated lending platforms. MyBucks is one such player. Some other players in this field are OnDeck, Kabbage, and LendUp. What might appear transformational with machine learning in MyBucks' case is just one of many examples of how machine learning is empowering a large number of finance-based companies to deliver disruptive products and services. So what makes machine learning so attractive to fintech, and how has machine learning fueled this entire industry's phenomenal growth? Read on.

Quicker and more efficient credit approvals

Long before machine learning was established in large industries as it is today, it was quite commonly used to solve fraud detection problems. This primarily involved building a self-learning model that started from a training dataset and further expanded its learning based on incoming data. This way the system could distinguish a fraudulent activity from a non-fraudulent one. Modern-day machine learning systems are no different. They use the very same predictive models that rely on segmentation algorithms and methods. Fintech companies are investing in big data analytics and machine learning algorithms to make credit approvals quicker and more efficient. These systems are designed in such a way that they pull data from several sources online, develop a good understanding of transactional behaviour, purchasing patterns, and social media behavior, and accordingly decide creditworthiness.

Robust fraud prevention and error detection methods

Machine learning is empowering banking institutions and finance service providers to embrace artificial intelligence and combat what they fear the most: fraudulent activities. Faster and more accurate processing of transactions has always been a fundamental requirement in the finance industry. An increasing number of startups are now developing machine learning and artificial intelligence systems to combat the challenges around fraudulent transactions, or even instances of incorrectly reported transactions. BillGuard is one such company: it uses big data analytics to make sense of the billing complaints reported by millions of consumers. Its system then builds its intelligence from this crowd-sourced data and reports incorrect charges back to consumers, thereby helping them get their money back.
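Under the hood, both the credit-approval and fraud-detection systems described above boil down to a classifier trained on historical behavioral data. The sketch below is a toy illustration of that idea, not any particular lender's system: the data is synthetic, the features are made up, and scikit-learn defaults are used throughout.

```python
# Toy credit-scoring sketch: synthetic data, illustrative features only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical applicant features: monthly income, average transaction size,
# number of late payments, and account age in months.
X = np.column_stack([
    rng.normal(3_000, 900, n),   # income
    rng.normal(120, 40, n),      # average transaction size
    rng.poisson(1.5, n),         # late payments
    rng.integers(1, 120, n),     # account age
])

# Synthetic label: more late payments and lower income raise default risk.
risk = 0.9 * X[:, 2] - 0.001 * X[:, 0] + rng.normal(0, 1, n)
y = (risk > np.percentile(risk, 85)).astype(int)   # roughly 15% "bad" outcomes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]          # probability of default
print(f"ROC AUC on held-out applicants: {roc_auc_score(y_test, scores):.3f}")
```

A real lender would feed in far richer signals (repayment history, bureau data, device and transaction metadata) and would also need explainability and fairness checks before putting such scores into production.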
Reinventing banking solutions with the powerful combination of APIs and Machine Learning

Innovation is key to survival in the finance industry. The 2017 PwC global fintech report suggests that incumbent finance players are worried about advances in the fintech industry that pose direct competition to banks. But the way ahead for banks definitely goes through fintech, which is evolving every day. In addition to machine learning, the API is the other strong pillar driving innovation in fintech. Developments in machine learning and AI are reinventing the traditional lending industry, and APIs are acting as the bridge between classic banking problems and future possibilities. Established banks are now taking the API (Application Programming Interface) route to tie up with innovative fintech players in their endeavor to deliver modern solutions to customers. Fintech players are also able to reap the benefits of working with the old guard, the banks, in a world where APIs have suddenly become the new common language.

So what is this new equation all about? API solutions are helping bridge the gap between the old and the new by enabling collaboration in newer ways to solve traditional banking problems. This impact can be seen far and wide within the industry, and fintech as an industry isn't limited to lending tech and everyday banking alone. There are several verticals within the industry that now see an increased impact of machine learning: payments, wealth management, capital markets, insurance, blockchain, and now even chatbots for customer service, to name a few.

So where do you think this partnership is headed? Please leave your comments below and let us know.
The New AI Cold War Between China and the USA

Neil Aitken
28 Jun 2018
6 min read
The Cold War between the United States and Russia ended in 1991. However, considering the 'behind the scenes' behavior of the world's two current superpowers, China and the USA, another might just be beginning. This time around, many believe that the real battle doesn't relate to the trade deficit between the two countries, despite news stories detailing the escalation of trade tariffs. In the next decade and a half, the real battle will take place between China and the USA in the technology arena, specifically in the area of Artificial Intelligence, or AI.

China's not shy about its AI ambitions

China has made its goals clear when it comes to AI. It has publicly announced its plan to be the world leader in Artificial Intelligence by 2030. The country has learned a hard lesson from missing out on previous tech booms, notably the race for internet supremacy early this century. Now, it is taking a far more proactive stance. The AI market is estimated to be worth $150 billion per year by 2030, slightly over a decade from now, and China has made very clear public statements that the country wants it all. The US, in contrast, has a number of private companies striving to carve out a leadership position in AI but no holistic policy. Quite the contrary, in fact: Trump's government says that there is no need for an AI moonshot, and that minimizing government interference is the best way to make sure the technology flourishes.

What makes China so dangerous as an AI threat?

China's background and current circumstances give it a set of valuable strategic advantages when it comes to AI. AI solutions are based primarily on two things. First, and of critical importance, is the amount of data available to 'train' an AI algorithm and the relative ease or difficulty of obtaining access to it. Second is the algorithm which sorts the data, looking for patterns and insights derived from research, which are used to optimize the AI tools that interpret it. China leads the world on both fronts.

China has more data: China's population is four times larger than that of the US, giving it a massive data advantage. China has a total of 730 million daily internet users and 704 million smartphone mobile internet users. Each of these connected individuals uses their phone, laptop or tablet online each day. Those digital interactions leave logs of location, time, action performed and many other variables. In sum, China's huge population is constantly generating valuable data which can be mined for value.

Chinese regulations give public and private agencies easier access to this data: Few countries have exemplary records when it comes to human rights. Both Australia and the US, for example, have been rebuked by the UN for their treatment of immigration in recent years. Questions have been asked of China too. Some suggest that China's centralized government, and its allegedly somewhat shady history when it comes to human rights, means it can provide internet companies with more data, more easily, than their private equivalents in the US could dream of. Chinese cybersecurity laws require companies doing business in the country to store their data locally. The government has placed one state representative on the board of each of its major tech companies, giving it direct, unfettered central government influence over the strategic direction and intent of those companies, especially when it comes to coordinating the distribution of the data they obtain. In the US, by contrast, data leakage has been one of the most prominent news stories of 2018.
Given Facebook's testimony to Congress over the Facebook/Cambridge Analytica data sharing scandal, it would be hard to claim that US companies have access to data beyond their own, with each company competing on its own to evolve AI solutions fastest.

It's more secretive: China protects its advantage by limiting other countries' access to its AI-related findings and information, while taking advantage of the open publication of cutting-edge ideas generated by scientists elsewhere in the world.

How China is doubling down on its natural advantage in AI solution development

A number of metrics show China's growing advantage in the area. China is investing more money in AI and leading the world in the number of university-led research papers on AI that it publishes. China overtook the US in AI funding in 2015 and has been increasing investment in the area since (source: Wall Street Journal). China now also performs more research into AI than the US, as measured by the number of published, peer-reviewed scientific papers (source: HBR).

Why 'network effects' will decide the ultimate winner in the AI arms race

You won't see evidence of a Cold War in the behavior of world leaders. The handshakes are firm and the visits are cordial. Everybody smiles when they meet at the G8. However, a look behind the curtain clearly shows a 21st century arms race underway, led by AI-related investments in both countries.

Network effects ensure that there is often only one winner in a fight for technological supremacy. Whoever has the 'best product' for a given application wins the most users. The data obtained from those users' interactions with the tool is used to hone its performance, creating a virtuous circle. The result is evident in almost every sphere of tech: network effects explain why most people use only Google, why there's only one Facebook, and how Netflix has overtaken cable TV in the US as the primary source of video entertainment. Ultimately, there is likely to be only one winner in the war surrounding AI, too.

From a military perspective, the advantage China has in its starting point for AI solution development could be the deciding factor. As we've seen, China has more people, with more devices, generating more data. That is likely to help the country develop workable AI solutions faster. It ingests the hard-won advances that US data scientists develop and share, but does not share its own. Finally, it simply outspends and out-researches the US, investing more in AI than any other country. China's coordinated approach outpaces the US's market-based approach at every step. The country with the best AI solutions for each application will gain a 'winner takes all' advantage and the winning hand in the $300 billion game of AI market ownership.

We must change how we think about AI, urge AI founding fathers
Does AI deserve to be so Overhyped?
Alarming ways governments are using surveillance tech to watch you
Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

Savia Lobo
05 Jun 2019
4 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, covers Shoshana Zuboff's take on how to tackle the complexities of surveillance capitalism, along with the 21st-century solutions she proposes for doing so.

Shoshana Zuboff, author of The Age of Surveillance Capitalism, talked about the economic imperatives within surveillance capitalism. Zuboff says that surveillance capitalism rests on the unilateral claiming of private human experience and its translation into behavioral data, from which predictions about our behavior are manufactured. These predictions are sold in a new kind of marketplace that trades exclusively in human futures. When we deconstruct the competitive dynamics of these markets, we come to understand the new imperatives: scale, because they need a lot of data in order to make good predictions (economies of scale); and scope, because they need a variety of data to make good predictions.

She shared a brief quote from a data scientist: "We can engineer the context around a particular behavior and force change. That way we are learning how to write the music, and then we let the music make them dance." This behavioral modification is systemically institutionalized on a global scale and mediated by a now ubiquitous digital infrastructure.

She further explains that the kind of law and regulation needed today will be 21st century solutions aimed at the unique 21st century complexities of surveillance capitalism. She briefly outlined three arenas in which legislative and regulatory strategies can effectively align with the structure and consequences of surveillance capitalism:

First, we need lawmakers to devise strategies that interrupt and in many cases outlaw surveillance capitalism's foundational mechanisms. This includes the unilateral taking of private human experience as a free source of raw material and its translation into data. It includes the extreme information asymmetries necessary for predicting human behavior. It includes the manufacture of computational prediction products based on the unilateral and secret capture of human experience. It includes the operation of prediction markets that trade in human futures.

Second, from the point of view of supply and demand, surveillance capitalism can be understood as a market failure. Every piece of research over the last decades has shown that when users are informed of the backstage operations of surveillance capitalism, they want no part of it: they want protection, they reject it, they want alternatives. We need laws and regulatory frameworks designed to advantage companies that want to break with the surveillance capitalist paradigm. Forging an alternative trajectory to the digital future will require alliances of new competitors who can summon and institutionalize an alternative ecosystem. True competitors that align themselves with the actual needs of people and the norms of market democracy are likely to attract just about every person on earth as their customers.

Third, lawmakers will need to support new forms of citizen action, collective action, just as nearly a century ago workers won legal protection for their rights to organize, to bargain and to strike.
New forms of citizen solidarity are already emerging: in municipalities that seek an alternative to the Google-owned smart city future, in communities that want to resist the social costs of so-called disruption imposed for the sake of others' gain, and among workers who seek fair wages and reasonable security in the precarious conditions of the so-called gig economy.

She says, "Citizens need your help, but you need citizens, because ultimately they will be the wind behind your wings; they will be the sea change in public opinion and public awareness that supports your political initiatives."

"If together we aim to shift the trajectory of the digital future back toward its emancipatory promise, we resurrect the possibility that the future can be a place that all of us might call home," she concludes.

To know more, you can listen to the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.

WWDC 2019 highlights: Apple introduces SwiftUI, new privacy-focused sign in, updates to iOS, macOS, and iPad and more
Experts present most pressing issues facing global lawmakers on citizens' privacy, democracy and rights to freedom of speech
Apple previews iOS 13: Sign in with Apple, dark mode, advanced photo and camera features, and an all-new Maps experience
NeurIPS Invited Talk: Reproducible, Reusable, and Robust Reinforcement Learning

Prasad Ramesh
25 Feb 2019
6 min read
On the second day of the NeurIPS conference held in Montreal, Canada last year, Dr. Joelle Pineau presented a talk on reproducibility in reinforcement learning. She is an Associate Professor at McGill University and a Research Scientist at Facebook Montreal, and the talk was titled 'Reproducible, Reusable, and Robust Reinforcement Learning'.

Reproducibility and crisis

Dr. Pineau starts by quoting Bollen et al. in a report for the National Science Foundation: "Reproducibility refers to the ability of a researcher to duplicate the results of a prior study, using the same materials as were used by the original investigator. Reproducibility is a minimum necessary condition for a finding to be believable and informative."

Reproducibility is not a new concept and has appeared across various fields. In a 2016 Nature survey of 1,576 scientists, 52% said that there is a significant reproducibility crisis, and 38% agreed there is a slight crisis.

Reinforcement learning is a very general framework for decision making. About 20,000 papers were published in this area alone in 2018, with the year not even over at the time of the talk, compared to just about 2,000 papers in the year 2000. The focus of the talk is the class of reinforcement learning methods that has gotten the most attention and shown a lot of promise for practical applications: policy gradients. The idea here is that the policy, or strategy, is learned as a function, and this function can be represented by a neural network.

Pineau picks four research papers on policy gradients that come up most often in the literature. Her team used the MuJoCo simulator to compare the four algorithms. It is not important to know which algorithm is which; the point is the approach used to compare them empirically. The results differed across environments (Hopper, Swimmer), and the variance was also drastically different for a given algorithm. Even when using different code and policies, the results for a given algorithm were very different in different environments. It was also observed that people writing papers may not always be motivated to find the best possible hyperparameters and very often use the defaults. When the best possible hyperparameters were used and two algorithms were compared fairly, the results were clean and distinguishable.

Here n=5, meaning five different random seeds. Picking n influences the size of the confidence interval (CI); n=5 was used because most papers use at most 5 trials. Some papers also report the top 5 results out of n runs, where n is not specified. That is a good way to show good results, but it introduces a strong positive bias and makes the variance appear small. (Source: NeurIPS website.)

Some people argue that the field of reinforcement learning is broken. Pineau stresses that this is not her message, and notes that fair comparisons don't always give the cleanest results. Different methods may have very distinct sets of hyperparameters, in number, value, and sensitivity. Most importantly, the best method to choose depends heavily on the data and the computation budget you can spare; this is an important point for achieving reproducibility when applying these algorithms to your own problem.

Pineau and her team surveyed 50 RL papers from 2018 and found that significance testing was applied in only 5% of them. Graphs with shaded regions appear in many papers, but without information on what the shaded area represents, readers cannot tell whether it is a confidence interval or a standard deviation.
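To make that concrete, here is a minimal sketch of the kind of seed-level reporting Pineau is advocating. The per-seed returns are made-up placeholder numbers; the point is simply to report a mean with a clearly defined bootstrap confidence interval and to run a significance test across seeds instead of shading an undefined region.

```python
# Minimal sketch: comparing two RL algorithms across random seeds.
# The per-seed final returns below are hypothetical placeholder values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

algo_a = np.array([3021., 2876., 3150., 2990., 3102., 2801., 3055., 2967., 3120., 2899.])
algo_b = np.array([2750., 3010., 2680., 2905., 2814., 2770., 2950., 2698., 2860., 2733.])

def bootstrap_ci(returns, n_boot=10_000, alpha=0.05):
    """Mean return with a 95% bootstrap confidence interval."""
    means = [rng.choice(returns, size=len(returns), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return returns.mean(), lo, hi

for name, returns in [("A", algo_a), ("B", algo_b)]:
    mean, lo, hi = bootstrap_ci(returns)
    print(f"algorithm {name}: mean return {mean:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")

# Welch's t-test does not assume the two algorithms have equal variance.
t_stat, p_value = stats.ttest_ind(algo_a, algo_b, equal_var=False)
print(f"Welch's t-test across seeds: t = {t_stat:.2f}, p = {p_value:.4f}")
```

With only a handful of seeds the interval will be wide, and that width is precisely the information a shaded region without a definition hides.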
Pineau says: "Shading is good, but shading is not knowledge unless you define it properly."

A reproducibility checklist

For people publishing papers, Pineau presents a checklist created in consultation with her colleagues. It says that for algorithms, the items included should be a clear description, an analysis of complexity, and a link to source code and dependencies. For theoretical claims, a statement of the result, a clear explanation of any assumptions, and a complete proof of the claim should be included. There are also other items in the checklist for figures and tables. (The complete checklist is available on the NeurIPS website.)

The role of infrastructure in reproducibility

People may think that since the experiments are run on computers, results will be more predictable than those of other sciences. But even in hardware there is room for variability, so specifying it can be useful; for example, the properties of CUDA operations.

On some myths

"Reinforcement Learning is the only case of ML where it is acceptable to test on your training set." Do you have to train and test on the same task? Pineau says that you really don't, and presents three examples. In the first, the agent moves around an image in four directions and then identifies what the image is; with higher n, the variance is greatly reduced. The second is an Atari game where the black background is replaced with videos, which are a source of noise and a better representation of the real world than a limited simulated environment where external real-world factors are not present. She then talks about multi-task RL in photorealistic simulators as a way to incorporate noise. The simulator is an emulator built from images and videos taken from real homes. The environments created are completely photorealistic and have properties of the real world, for example mirror reflections. Working in the real world is very different from working in a limited simulation; for one thing, a lot more data is required to represent the real world.

The talk ends with the message that science is not a competitive sport but a collective institution that aims to understand and explain. There is also an ICLR reproducibility challenge that you can join. The goal is to get community members to try to reproduce the empirical results presented in a paper, on an open-review basis. Last year, 80% of authors changed their paper based on the feedback given by contributors who tested it.

Head over to the NeurIPS Facebook page for the entire lecture and other sessions from the conference.

How NeurIPS 2018 is taking on its diversity and inclusion challenges
NeurIPS 2018: Rethinking transparency and accountability in machine learning
Researchers unveil a new algorithm that allows analyzing high-dimensional data sets more effectively, at NeurIPS conference
10 To-dos for Industrial Internet Architects

Aaron Lazar
24 Jan 2018
4 min read
Note: This is a guest post by Robert Stackowiak, a technology business strategist at the Microsoft Technology Center. Robert has co-authored the book Architecting the Industrial Internet with Shyam Nath, who is the director of technology integrations for Industrial IoT at GE Digital. You may also check out our interview with Shyam for expert insights into the world of IIoT, Big Data, Artificial Intelligence and more.

Just about every day, one can pick up a technology journal or view an online technology article about what is new in the Industrial Internet of Things (IIoT). These articles usually provide insight into IIoT solutions to business problems, or into how a specific technology component is evolving to provide a needed function. Various industry consortia, such as the Industrial Internet Consortium (IIC), provide extremely useful documentation defining key aspects of the IIoT architecture that the architect must consider. These broad reference architecture patterns have also begun to consistently include specific technologies and common components. The authors of Architecting the Industrial Internet felt the time was right for a practical guide for architects. The book provides guidance on how to define and apply an IIoT architecture in a typical project today by describing architecture patterns. In this article, we explore ten to-dos for Industrial Internet architects designing these solutions.

Just as technology components are showing up in common architecture patterns, their justification and use cases are also being discovered through repeatable processes. The sponsorship and requirements for these projects are almost always driven by leaders in the line of business in a company. Techniques for uncovering these projects can be replicated as architects gain the needed discovery skills.

Industrial Internet architects' to-dos:

1. Understand IIoT: Architects first seek to gain an understanding of what is different about the Industrial Internet, the evolution to specific IIoT solutions, and how legacy technology footprints might fit into that architecture.
2. Understand IIoT project scope and requirements: They next research guidance from industry consortia and gather functional viewpoints. This helps them better understand the requirements their architecture must deliver solutions to, and the scope of effort they will face.
3. Act as a bridge between business and technical requirements: They quickly come to realize that, since successful projects are driven by responding to business requirements, the architect must bridge the line-of-business and IT divide present in many companies. They are always on the lookout for requirements and means to justify these projects.
4. Narrow down viable IIoT solutions: Once the requirements are gathered and a potential project appears to be justifiable, requirements and functional viewpoints are aligned in preparation for defining a solution.
5. Evaluate IIoT architectures and solution delivery models: Time to market of a proposed Industrial Internet solution is often critical to business sponsors. Most architecture evaluations include consideration of applications or pseudo-applications that can be modified to deliver the needed solution in a timely manner.
6. Have a good grasp of IIoT analytics: The intelligence delivered by these solutions is usually linked to the timely analysis of data streams, so care is taken in defining Lambda architectures (or Lambda variations), including machine learning and data management components, and in deciding where analysis and response must occur.
7. Evaluate deployment options: Technology deployment options are explored, including the capabilities of proposed devices, networks, and cloud or on-premises backend infrastructures.
8. Assess IIoT security considerations: Security is top of mind today, and proper design includes not only securing the backend infrastructure but also extends to securing the networks and the edge devices themselves.
9. Conform to governance and compliance policies: The viability of the Industrial Internet solution can be determined by whether proper governance is put in place and whether compliance standards can be met.
10. Keep up with the IIoT landscape: While relying on current best practices, the architect must keep an eye on the future, evaluating emerging architecture patterns and solutions.

Author's Bio: Robert Stackowiak is a technology business strategist at the Microsoft Technology Center in Chicago, where he gathers business and technical requirements during client briefings and defines Internet of Things and analytics architecture solutions, including those that reside in the Microsoft Azure cloud. He joined Microsoft in 2016 after a 20-year stint at Oracle, where he was Executive Director of Big Data in North America. Robert has spoken at industry conferences around the world and co-authored many books on analytics and data management, including Big Data and the Internet of Things: Enterprise Architecture for A New Age (Apress), five editions of Oracle Essentials (O'Reilly Media), Oracle Big Data Handbook (Oracle Press), Achieving Extreme Performance with Oracle Exadata (Oracle Press), and Oracle Data Warehousing and Business Intelligence Solutions (Wiley). You can follow him on Twitter at @rstackow.