Generative Adversarial Networks (GANs): The next milestone In Deep Learning

Savia Lobo
09 Nov 2017
7 min read
With the rise in popularity of deep learning as a concept and a paradigm, neural networks are captivating the interest of machine learning enthusiasts and developers alike, by being able to replicate the human brain for efficient predictions, image recognition, text recognition, and much more. However, can these neural networks do something more, or are they just limited to predictions? Can they self-generate new data by learning from a training dataset? Generative Adversarial Networks (GANs) are here to answer these questions.

So, what are GANs all about?

Generative Adversarial Networks follow unsupervised machine learning, unlike traditional neural networks. When a neural network is taught to identify a bird, it is fed a huge number of images, including birds, as training data. Each picture is labeled before it is used to train the model. This labeling of data is both costly and time-consuming. So, how can you train your neural networks with less data? GANs are of great help here. They offer an easy way to train DL algorithms by slashing the amount of data required to train the neural network models, with no labeling of data required.

The architecture of a GAN includes a generative network model (G), which produces fake images or text, and an adversarial network model--also known as the discriminator model (D)--that distinguishes between the real and the fake productions by comparing the content sent by the generator with the training data it has. Both of these are trained separately by feeding each of them with training data and a competitive goal.

Source: Learning Generative Adversarial Networks

GANs in action

GANs were introduced by Ian Goodfellow, an AI researcher at Google Brain. He compares the generator and the discriminator models with a counterfeiter and a police officer. “You can think of this being like a competition between counterfeiters and the police,” Goodfellow said. “Counterfeiters want to make fake money and have it look real, and the police want to look at any particular bill and determine if it’s fake.”

Both the discriminator and the generator are trained simultaneously to create a powerful GAN architecture. Let’s peek into how a GAN model is trained:

1. Specify the problem statement and state the type of manipulation that the GAN model is expected to carry out.
2. Collect data based on the problem statement. For instance, for image manipulation, a lot of images need to be collected to feed in.
3. The discriminator is fed with images: one from the training set and one produced by the generator.
4. The discriminator can be termed ‘successfully trained’ if it returns 1 for the real image and 0 for the fake image.
5. The goal of the generator is to successfully fool the discriminator and get an output of 1 for each of its generated images.

In the beginning of the training, the discriminator loss--the ability to differentiate between real and fake images or data--is minimal. As the training advances, the generator loss decreases and the discriminator loss increases. This means the generator is now able to generate realistic images.
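To make the alternating training procedure above concrete, here is a minimal, illustrative sketch in PyTorch. It is not code from the article: the network sizes, optimizer settings, and the assumption of flattened 28x28 images are placeholders, chosen only to show how the discriminator is pushed toward outputting 1 for real samples and 0 for generated ones, while the generator is pushed to make the discriminator output 1 for its fakes.

```python
# Illustrative sketch only: a tiny GAN training step with placeholder sizes and hyperparameters.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 784  # assume 28x28 images, flattened

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

criterion = nn.BCELoss()
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Train the discriminator: 1 for real images, 0 for generated ones.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = criterion(discriminator(real_images), real_labels) + \
             criterion(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator: try to make the discriminator output 1 for its fakes.
    g_loss = criterion(discriminator(generator(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Calling train_step repeatedly on batches of real images plays out the counterfeiter-versus-police game described above.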
Real world applications of GANs

The basic application of GANs can be seen in generating photo-realistic images. But there is more to what GANs can do. Some of the instances where GANs are majorly put to use include:

Image Synthesis

Image synthesis is one of the primary use cases of GANs. Here, multilayer perceptron models are used in both the generator and the discriminator to generate photo-realistic images based on the training dataset of images.

Text-to-image synthesis

Generative Adversarial Networks can also be utilized for text-to-image synthesis. An example of this is generating a photo-realistic image from a caption. To do this, a dataset of images with their associated captions is given as training data. The dataset is first encoded using a hybrid neural network called a character-level convolutional recurrent neural network, which creates a joint representation of both in multimodal space for the generator and the discriminator. Both the generator and the discriminator are then trained on this encoded data.

Image Inpainting

Images that have missing parts or too much noise are given as input to the generator, which produces a near-real image. For instance, using the TensorFlow framework, DCGANs (Deep Convolutional GANs) can generate a complete image from a broken one. DCGANs are a class of CNNs that stabilize GANs for efficient usage.

Video generation

Static images can be transformed into short scenes with plausible motions using GANs. These GANs use scene dynamics in order to add motion to static images. The videos generated by these models are not real but illusions.

Drug discovery

Unlike text and image manipulation, Insilico Medicine uses GANs to build an artificially intelligent drug discovery mechanism. To do this, the generator is trained to predict a drug for a disease which was previously incurable. The task of the discriminator is to determine whether the drug actually cures the disease.

Challenges in training a GAN

Whenever a competition is laid out, there has to be a distinct winner. In GANs, there are two models competing against each other, and hence there can be difficulties in training them. Here are some challenges faced while training GANs:

Fair training: While training both models, precaution has to be taken that the discriminator does not overpower the generator. If it does, the generator will fail to train effectively. On the other hand, if the discriminator is too lenient, it will allow any illegitimate content to be generated.

Failure to understand the number and dimensions of objects present in a particular image: This usually occurs during the initial learning phase. For instance, GANs at times output an image which ends up having more than two eyes, which is not normal in the real world. Sometimes, they may present a 3D image like a 2D one, because they cannot differentiate between the two.

Failure to understand the holistic structure: GANs fall short in identifying universally correct images. They may generate an image which is totally opposed to how things look in real life; for instance, a cat with an elongated body shape, or a cow standing on its hind legs.

Mode collapse: This occurs when a low-variation dataset is processed by a GAN. The real world includes complex and multimodal distributions, where data may have different concentrated sub-groups. The problem here is that the generator learns to yield images based on any one sub-group, resulting in inaccurate output and causing a mode collapse.

To tackle these and other challenges that arise while training GANs, researchers have come up with DCGANs (Deep Convolutional GANs), Wasserstein GANs, and CycleGANs to ensure fair training, enhance accuracy, and reduce the training time.
AdaGANs are implemented to eliminate the mode collapse problem.

Conclusion

Although the adoption of GANs is not as widespread as one might imagine, there’s no doubt that they could change the way unsupervised machine learning is used today. It is not too far-fetched to think that their implementation in the future could find practical applications in not just image or text processing, but also in domains such as cryptography and cybersecurity. Innovation in developing newer GAN models with improved accuracy and less training time is the key here - and that is something surely worth keeping an eye on.


Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus

Natasha Mathur
24 Nov 2018
8 min read
When people talk about tech giants such as Amazon, Facebook, and Google, they usually talk about the great and powerful innovations that they’ve brought to the table, which have perpetually transformed the contemporary world. Of late, criticism that these same tech titans hold back the power of innovation from other, smaller companies as they become, what you may call, tech monopolies has been gaining traction.

In a podcast episode of Innovation For All, Sheana Ahlqvist talked to Sally Hubbard, an antitrust expert and investigative journalist at The Capitol Forum, regarding tech giants building monopolies. Here are some key highlights from the podcast.

Let’s recall the definition of monopoly: “A market structure characterized by a single seller, selling a unique product in the market. In a monopoly market, the seller faces no competition, as he is the sole seller of goods with no close substitute. Monopoly market makes the single seller the market controller as well as the price maker. He enjoys the power of setting the price for his goods”. In a nutshell: decrease the prices of your service and drive everyone else out of the business. A popular example is John D. Rockefeller, Standard Oil’s chief executive, who ruined other competitors by cutting the prices of oil until they went bankrupt, immediately after which the higher prices returned. Now, although there is no price-fixing in the case of Google or Facebook, since they offer completely free services, they’re still monopolies. Let’s have a look.

How are Amazon, Google, and Facebook tech monopolies?

If you look at each one of these organizations - Amazon, Facebook, and Google - they have carved out their own markets, with gargantuan and durable market power vested in the hands of each one of them. According to the US Department of Justice, a market share of greater than 50% has been necessary for courts to find the existence of monopolistic power. A dominant market share is a useful starting point in determining monopoly power.

Going by this rule, Google has dominated the search engine market, maintaining an 86.02% market share as of July 2018, as per Statista. This is way over 50%, making Google a monopoly. The majority of Google’s revenues are generated through advertising. Similarly, Facebook dominates the social media market, with a worldwide market share of 66.67%, making it a monopoly too. Amazon, on the other hand, has a 41% market share in the e-commerce retail market, which is expected to increase significantly to 50% of the entire e-commerce retail market’s GMV by 2021. This brings it pretty close to becoming a monopoly in the e-commerce market soon.

Another factor considered under the Sherman Act, a part of antitrust law, when identifying a firm that possesses monopoly power is the existence of an anti-competitive effect, i.e. companies trying to maintain or acquire a dominant position by excluding competitors or preventing new entry. One recent example that comes to mind is when Google was fined $5 billion in July this year for breaching the EU’s antitrust laws. Google was fined for three types of illegal restrictions on the use of Android, cementing the dominance of its search engine. As per the EU, Google denied its rivals a chance to innovate and compete on merits, which is illegal under the EU’s antitrust laws.
Also Read: A quick look at E.U.’s antitrust case against Google’s Android

Monopolies and Social Injustice

Hubbard points out how these tech titans don’t have any major competitors or substitutes, and even if you don’t pay most of these tech giants with money, you pay them with your data. This is more than enough for world domination, which is always an underlying aspiration for tech companies as they strive to be “the one” in the eyes of their customers, by carefully leveraging their data. This data also puts these companies at an unfair advantage over other smaller and newer businesses. As Clive Humby, a British mathematician, rightly said, “data is the new oil” in the digital economy.

Hubbard explains how the downsides of this monopoly might not be visible to the consumer, but they affect entrepreneurs and small businesses, who are greatly harmed by the practices of these companies. Taking Amazon, for instance: no one wishes to be dependent on their competitor; however, since Amazon has integrated the service of selling products on its platform, not only is everyone competing against Amazon, they are also dependent on Amazon, as it is Amazon who decides the rules for the sellers. Add to this the fact that Amazon holds a ginormous amount of consumer data, putting it at an unfair advantage over others, as it can promote its own products over theirs. There is currently an ongoing EU investigation into Amazon’s use of consumer and seller data collected on its platform to better its own products sold on its platform.

Similarly, Google’s monopoly is evident in the fact that it gets to decide the losers and winners of the internet on Google search, prioritizing its products over the others. An example of this is Google getting fined 2.7 billion dollars by the EU last year, after the EU ruled the company had abused its power by promoting its own shopping comparison service at the top of search results.

Facebook, on the other hand, doesn’t have direct competition, leaving users with less choice in terms of social network sites, making it a monopoly. Add to that the fact that other major social media platforms such as Instagram and WhatsApp are also owned by Facebook. Hubbard explains how Facebook doesn’t have competition, so it can prioritize its profits over other factors such as user data, as it’s really not concerned about user loss. This is evident in the number of scandals that Facebook has gotten itself into regarding user data.

Facebook is facing a whole lot of data and privacy-related controversies, the Cambridge Analytica scandal being the most popular one. Facebook suffered the largest security breach in its history, which left 50M user accounts compromised, last month. The Department of Housing and Urban Development (HUD) filed a complaint against Facebook in August, alleging the platform is selling ads that discriminate against users based on race, religion, and sexuality. The ACLU also sued Facebook in September for enabling sex and age discrimination through targeted ads. Last week, the New York Times published a bombshell report on how Facebook has been following the strategy of ‘delaying, denying and deflecting’ the blame under the leadership of Sheryl Sandberg for all the controversies surrounding it.

Scandals aside, even if a user finds the content hosted by Facebook displeasing, they don’t really have a choice to “stop using Facebook”, as their friends and family continue to use the platform to stay in touch.
Also, Facebook charges advertisers based on how many people see a message instead of on ad clicks. This is why Facebook’s algorithm is programmed in a way that prioritizes more engaging branded content and ads over the others.

Monopoly and Gender Inequality

As the market power of these tech giants increases, so does their wealth. Hubbard points out that wealth from the many among the working and middle classes gets transferred to the few belonging to the 1% and 0.1% at the top of the income and wealth distribution. The concentration of market power hurts workers and results in depressed wages, affecting women and other minority workers the most. “When general wages go down or stagnate, female workers are even worse off. Women make 78 cents to a man’s dollar, with black women making 64 cents and Latina women making 54 cents for every dollar a white man makes. As wages by the bottom 99% of earners continue to shrink, women get paid a mere percentage of those fewer dollars. And the top 1% of earners are predominantly men”, mentions Sally Hubbard.

There have also been declines in employee mobility, as there are fewer firms competing due to giant firms acquiring smaller firms. This leads to reduced bargaining power in the hands of an employee. Moreover, these firms also impose non-compete clauses and no-poach agreements, putting a damper on workers’ ability to switch jobs. As eloquently put by Hubbard, “these tech platforms are the ones controlling the rules of the arena in which the game is played and are also the ones playing the game”. Taking this analogy into consideration, it’s anyone’s guess who’ll win the game.

OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Amazon splits HQ2 between New York and Washington, D.C. after making 200+ cities compete over a year; public sentiments largely negative


Elon Musk's tiny submarine is a lesson in how not to solve problems in tech

Richard Gall
11 Jul 2018
6 min read
Over the last couple of weeks the world has been watching on as rescuers attempted to find, and then save, a young football team from the Tham Luang caves in Thailand. Owing to a remarkable coordinated effort, and a lot of bravery from the team (including one diver who died), all 12 boys were brought back to safety. Tech played a big part in the rescue mission too - from drones to subterranean radios. But it wanted to play a bigger role - or at least Elon Musk wanted it to.

Musk and his submarine have been a somewhat bizarre subplot to this story, and while you can't fault someone for offering to help out in a crisis, you might even say it was unnecessary. Put simply, Elon Musk's involvement in this story is a fable about the worst aspects of tech-solutionism. It offers an important lesson for anyone working in tech on how not to solve problems. Bringing a tiny submarine to a complex rescue mission that requires coordination between a number of different agencies, often operating from different countries, is a bit like telling someone to use Angular to build their first eCommerce store. It's like building an operating system from scratch because your computer has crashed. Basically, you just don't need it. There are better and more appropriate solutions - like Shopify or WooCommerce, or maybe just rebooting your system.

Lesson 1: Don't insert yourself in problems if you're not needed

Elon Musk first offered his support to the rescue mission in Thailand on July 4. It was a response to one of his followers.

https://twitter.com/elonmusk/status/1014509856777293825

Musk's first instincts were measured, saying that he suspected 'the Thai government has got this under control', but it didn't take long for his mind to change. Without any specific invitation or coordination with the parties leading the rescue mission, Musk's instincts to innovate and create kicked in.

This sort of situation is probably familiar to anyone who works in tech - or, for that matter, anyone who has ever had a job. Perhaps you're the sort of person who hears about a problem and your immediate instinct is to fix it. Or perhaps you've been working on a project, someone hears about it, and immediately they're trying to solve all the problems you've been working on for weeks or months. Yes, sometimes it's appealing, but on the other side it can be incredibly annoying and disruptive. This is particularly true in software engineering, where you're trying to solve problems at every level - from strategy to code. There's rarely a single solution. There's always going to be a difference of opinion. At some point we need to respect boundaries and allow the right people to get on with the job.

Lesson 2: Listen to the people involved and think carefully about the problem you're trying to solve

One of the biggest challenges in problem solving is properly understanding the problem. It's easy to think you've got a solution after a short conversation about a problem, but there may be nuances you've missed or complexities that aren't immediately clear. Humility can be a very valuable quality when problem solving. It allows everyone involved to think clearly about the task at hand; it opens up space for better solutions. As the old adage goes, when every problem looks like a nail, every solution looks like a hammer. For Musk, when a problem looks like kids stuck in an underwater cave, the solution looks like a kid-sized submarine. Never mind that experts in Thailand explained that the submarine would not be 'practical.'
For Musk, a solution is a solution. "Although his technology is good and sophisticated it’s not practical for this mission," said Narongsak Osatanakorn, one of the leaders of the rescue mission, speaking to the BBC and The Guardian.

https://twitter.com/elonmusk/status/1016110809662066688

Okay, so perhaps that's a bit of a facetious example - but it is a problem we can run into, especially if we work in software. Sometimes you don't need to build a shiny new SPA - your multi-page site might be just fine for its purpose. And maybe you don't need to deploy on containers - good old virtual machines might do the job for you. In these sorts of instances it's critical to think about the problem at hand. To do that well you also need to think about the wider context around it - what infrastructure is already there? If we change something, is that going to have a big impact on how it's maintained in the future? In many ways, the lesson here recalls the argument put forward by the Boring Software Manifesto in June. In it, the writer argued in favor of things that are 'simple and proven' over software that is 'hyped and volatile'.

Lesson 3: Don't take it personally if people decline your solutions

Problem solving is a collaborative effort, as we've seen. Offering up solutions is great - but it's not so great when you react badly to rejection.

https://twitter.com/elonmusk/status/1016731812159254529

Hopefully, this doesn't happen too much in the workplace - but when your job is to provide solutions, it doesn't help anyone to bring your ego into it. In fact, it indicates selfish motives behind your creative thinking. This link between talent, status, and ego has been developing for some time now in the tech world. Arguably Elon Musk is part of a trend of engineers - ninjas, gurus, wizards, whatever label you want to place on yourself - for whom problem-solving is as much an exercise in personal branding as it is actually about solving problems. This trend is damaging for everyone - it not only undermines people's ability to be creative, it transforms everyone's lives into a rat race for status and authority. That's not only sad, but also going to make it hard to solve real problems.

Lesson 4: Sometimes collaboration can be more inspiring than Elon Musk

Finally, let's think about the key takeaway here: everyone in that cave was saved. And this wasn't down to some miraculous invention. It was down to a combination of tools - some of them even pretty old. It wasn't down to one genius piece of engineering, but instead a combination of creative thinking and coordinated problem solving that used the resources available to bring a shocking story to a positive conclusion. Working in tech isn't always going to be a matter of life and death - but it's the collaborative and open world we want to work in, right?


Voice, natural language, and conversations: Are they the next web UI?

Sugandha Lahoti
08 Jun 2018
5 min read
Take any major conference that happened this year: Google I/O, Apple’s WWDC, or Microsoft Build. A major focus of all these conferences by top-notch tech leaders was improving user experience, smoothing out the process of how a user experiences their products. In present times, the user experience is heavily dependent on how a system interacts with a human. It may be through responsive web designs or appealing architectures. It may also be through an interactive module, such as a conversational UI, a chatbot, or a voice interface - essentially the same thing, albeit with slight changes in their definition. Irrespective of what they are called, these UX models have one fixed goal: to improve the interaction between a human and a system, such that it feels real.

In our recently conducted Packt Skill-up survey 2018, we asked developers and tech pros whether Conversational User Interfaces and chatbots are going to be the future of web UI. Well, it seems yes, as over 65% of respondents agreed that chat interactions and Conversational User Interfaces are the future of the web. After the recent preview of the power of Google Duplex, those numbers might be even higher if asked again today. Why has this paradigm of interacting with the web shifted from text and even visual searches on mobile to voice, natural language, and conversational UI? Why are Apple’s Siri, Google’s Voice assistant, Microsoft’s Cortana, and Amazon Echo releasing new versions every day?

Computing power & NLP, the two pillars

Any chatbot or voice interface requires two major factors to make it successful. One is computational power, which lets a conversational UI process complex calculations. The other is natural language processing, which actually makes a chatbot conduct human-like conversations. Both these areas have made tremendous progress in recent times. A large number of computational chips, namely GPUs, TPUs, as well as quantum computers, are being developed, which are capable of processing complex calculations in a jiffy. NLP has also gained momentum, both in speech recognition capabilities (understanding language) and artificial intelligence (learning from experience). As technology in these areas expands, it paves the way for companies to adopt conversational UIs as their main user interface.

The last thing we need is more apps

There are already a very large number of apps (read: millions) available in app stores, and they are increasing every day. We are almost at the peak of the hype cycle, and from here the only way is down. Why? Well, I’m sure you’ll agree, downloading, setting up, and managing an app is a hassle, not to mention that humans have limited attention spans, so switching between multiple apps happens quite often. Conversational UIs are rapidly taking up the vacuum left behind by mobile apps. They integrate the functionalities of multiple apps in one, so you have a simple messaging app which can also book cabs, search and shop, or order food. Moreover, they can simplify routine tasks. AI-enabled chatbots can remind you of scheduled meetings, bring up the news for you every morning, analyze your refrigerator for food items to be replenished, and update your shopping cart, all with simple commands. Advancements in deep learning have also produced what are known as therapist bots. Users can confide in bots just as they do with human friends when they have a broken heart, have lost a job, or have been feeling down.
(This view does assume that the service provider respects the users’ privacy and adheres to strict policies related to data privacy and protection.)

The end of screen-based interfaces

Another flavor of conversational UI is the Voice User Interface (VUI). Typically, we interact with a device directly through a touchscreen or indirectly with a remote control. However, VUI is the touch-less version of technology where you only need to think aloud with your voice. These interfaces can work solo, like Amazon Echo or Google Home, or be combined with text-based chatbots, like Apple’s Siri or the Google voice assistant. You simply need to say a command or type it, and the task is done. “Siri, text Robert: I’m running late for the meeting.”

And boy! Are voice user interfaces growing rapidly! Google Duplex, announced at Google I/O 2018, can even make phone calls for users, imitating natural human conversation almost perfectly. In fact, it also adds pause-fillers and phrases such as “um”, “uh-huh”, and “erm” to make the conversation sound as natural as possible. Voice interfaces also work amazingly for people with disabilities, including visual impairments. Users who are unable to use screens and keyboards can use VUI for their day-to-day tasks. Here’s a touching review of Amazon Echo shared by a wheelchair-bound user about how the device changed his life.

The world is being swept over by the wave of conversational UI, Google Duplex being the latest example. As AI deepens its roots across the technology ecosystem, intelligent assistant applications like Siri, Duplex, and Cortana will advance. This boom will push us closer to Zero UI, a seamless and interactive UI which eradicates the barrier between user and device.

Top 4 chatbot development frameworks for developers
How to create a conversational assistant or chatbot using Python
Building a two-way interactive chatbot with Twilio: A step-by-step guide


Iterative Machine Learning: A step towards Model Accuracy

Amarabha Banerjee
01 Dec 2017
10 min read
Learning something by rote, i.e., repeating it many times, perfecting a skill by practising it over and over again, or building something by making minor adjustments progressively to a prototype are things that come to us naturally as human beings. Machines can also learn this way, and this is called ‘iterative machine learning’. In most cases, iteration is an efficient learning approach that helps reach the desired end results faster and more accurately without becoming a resource crunch nightmare.

Now, you might wonder, isn’t iteration inherently part of any kind of machine learning? In other words, modern day machine learning techniques across the spectrum, from basic regression analysis, decision trees, and Bayesian networks to advanced neural nets and deep learning algorithms, have some inherent iterative component built into them. What is the need, then, for discussing iterative learning as a standalone topic? This is simply because introducing iteration externally to an algorithm can minimize the error margin and therefore help in accurate modelling.

How Iterative Learning works

Let’s understand how iteration works by looking closely at what happens during a single iteration flow within a machine learning algorithm. A pre-processed training dataset is first introduced into the model. After processing and model building with the given data, the model is tested, and then the results are matched with the desired result/expected output. The feedback is then returned to the system for the algorithm to further learn and fine-tune its results. This clearly shows that two iteration processes take place here:

Data Iteration - inherent to the algorithm
Model Training Iteration - introduced externally

Now, what if we did not feed the results back into the system, i.e., did not allow the algorithm to learn iteratively, but instead adopted a sequential approach? Would the algorithm work, and would it provide the right results? Yes, the algorithm would definitely work. However, the quality of the results it produces is going to vary vastly based on a number of factors: the quality and quantity of the training dataset, the feature definition and extraction techniques employed, and the robustness of the algorithm itself, among many others. Even if all of the above were done perfectly, there is still no guarantee that the results produced by a sequential approach will be highly accurate. In short, the results will neither be accurate nor reproducible. Iterative learning thus allows algorithms to improve model accuracy.

Certain algorithms have iteration central to their design and can be scaled as per the data size. These algorithms are at the forefront of machine learning implementations because of their ability to perform faster and better. In the following sections we will discuss iteration in different sets of algorithms, each from the three main machine learning approaches - supervised ML, unsupervised ML, and reinforcement learning.

The Boosting algorithms: Iteration in supervised ML

The boosting algorithms, inherently iterative in nature, are a brilliant way to improve results by minimizing errors. They are primarily designed to reduce bias in results, transform a particular set of weak learning classifier algorithms into strong learners, and enable them to reduce errors. Some examples are:

AdaBoost (Adaptive Boosting)
Gradient Tree Boosting
XGBoost

How they work

All boosting algorithms have in common classifiers which are iteratively modified to reach the desired result.
Let’s take the example of finding cases of plagiarism in a certain article. The first classifier here would be to find a group of words that appears somewhere else, or in another article, which would result in a red flag. If we create 10 separate groups of words and term them classifiers 1 to 10, then our article will be checked on the basis of these classifiers, and any possible matches will be red flagged. But having no red flags with these 10 classifiers would still not mean the article is definitely 100% original. Thus, we would need to update the classifiers, create shorter groups perhaps based on the first pass, and improve the accuracy with which the classifiers can find similarity with other articles. This iteration process in boosting algorithms eventually leads us to a fairly high rate of accuracy. The reason is that after each iteration, the classifiers are updated based on their performance. The ones which have close similarity with other content are updated and tweaked so that we can get a better match. This process of improving the algorithm inherently is termed boosting, and it is currently one of the most popular methods in supervised machine learning.

Strengths & weaknesses

The obvious advantage of this approach is that it allows minimal errors in the final model, as the iteration enables the model to correct itself every time there is an error. The downside is the higher processing time and the overall memory requirement for a large number of iterations. Another important aspect is that the error fed back to train the model is supplied externally, which means the supervisor has control over the model and how it is modified. This in turn has a downside: the model doesn’t learn to eliminate error on its own. Hence, the model is not reusable with another set of data. In other words, the model does not learn how to become error-free by itself and hence cannot be ported to another dataset, as it would need to start the learning process from scratch.

Artificial Neural Networks: Iteration in unsupervised ML

Neural networks have become the poster child for unsupervised machine learning because of their accuracy in predicting data models. Some well known neural networks are:

Convolutional Neural Networks
Boltzmann Machines
Recurrent Neural Networks
Deep Neural Networks
Memory Networks

How they work

Artificial neural networks are highly accurate in simulating data models mainly because of their iterative process of learning. But this process is different from the one we explored earlier for boosting algorithms. Here the process is seamless and natural, and in a way it paves the way for reinforcement learning in AI systems. Neural networks consist of electronic networks simulating the way the human brain works. Every network has an input and output node and in-between hidden layers that consist of algorithms. The input node is given the initial data set to perform a set of actions, and each iteration creates a result that is output as a string of data. This output is then matched with the actual result dataset, and the error is fed back to the input node. This error then enables the algorithms to correct themselves and get closer and closer to the actual dataset. This process is called training the neural network, and each iteration improves the accuracy. The key difference between the iteration performed here and how it is performed by boosting algorithms is that here we don’t have to update the classifiers manually; the algorithms change themselves based on the error feedback.
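To make this error-feedback loop concrete, here is a small, illustrative sketch. It is not from the article: the toy data, learning rate, and iteration count are assumptions, and a single linear "neuron" in NumPy stands in for a full network, purely to show how each pass produces an output, compares it with the expected result, and feeds the error back to adjust the model.

```python
# Illustrative sketch only: iterative error feedback on a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                        # toy input features
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)     # toy expected outputs

w = np.zeros(3)                                      # the model starts knowing nothing
learning_rate = 0.1

for iteration in range(200):
    predictions = X @ w                              # forward pass: produce an output
    error = predictions - y                          # compare with the expected output
    gradient = X.T @ error / len(y)                  # feed the error back...
    w -= learning_rate * gradient                    # ...and adjust the model slightly

print("learned weights:", w)                         # approaches true_w as iterations accumulate
```

Each pass through the loop is one iteration of the feedback cycle described above; stopping the loop early simply leaves the model less accurate.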
Strengths & weaknesses

The main advantage of this process is obviously the level of accuracy that it can achieve on its own. The model is also reusable, because it learns the means to achieve accuracy and doesn’t just give you a direct result. The flip side of this approach is that the models can go heavily wrong and deviate completely in a different direction. This is because the induced iteration takes its own course and doesn’t need human supervision. The Facebook chatbots deviating from their original goal and communicating between themselves in a language of their own is a case in point. But as the saying goes, smart things come with their own baggage. It’s a risk we have to be ready to tackle if we want to create more accurate models and smarter systems.

Reinforcement Learning

Reinforcement learning is an interesting case of machine learning where simple neural networks are connected, and together they interact with the environment to learn from their mistakes and rewards. The iteration introduced here happens in a more complex way: in the form of a reward or punishment for arriving at the correct or wrong result respectively. After each interaction of this kind, the multilayered neural networks incorporate the feedback and then recreate the models for better accuracy. This reward-and-punishment method puts it in a space where it is neither supervised nor unsupervised, but exhibits traits of both, and also has the added advantage of producing more accurate results. The con here is that the models are complex by design. Multilayered neural networks are difficult to handle in the case of multiple iterations, because each layer might respond differently to a certain reward or punishment. As such it may create inner conflict that might lead to a stalled system - one that can’t decide which direction to move in next.

Some Practical Implementations of Iteration

Many modern day machine learning platforms and frameworks have implemented the iteration process on their own to create better data models; Apache Spark and Hadoop MapReduce are two such examples. The way the two implement iteration is technically different, and each has its merits and limitations.

Let’s look at MapReduce first. It reads and writes data directly onto the HDFS filesystem present on the disk. Note that reading from and writing to the disk for every iteration takes significant time. This in a way creates a more robust and fault tolerant system, but compromises on speed. Apache Spark, on the other hand, stores the data in memory (as a Resilient Distributed Dataset), i.e., in RAM. As a result, each iteration takes much less time, which enables Spark to perform lightning fast data processing. But the primary problem with the Spark way of doing iteration is that dynamic memory (RAM) is much less reliable than disk storage for holding iteration data and performing complex operations. Hence it is much less fault tolerant than MapReduce.

Bringing it together

To sum up the discussion, we can look at the process of iteration and its stages in implementing machine learning models roughly as follows:

Parameter Iteration: This is the first and inherent stage of iteration for any algorithm. The parameters involved in a certain algorithm are run multiple times, and the best fitting parameters for the model are finalized in this process.

Data Iteration: Once the model parameters are finalized, the data is put into the system and the model is simulated.
Multiple sets of data are put into the system to check the parameters’ effectiveness in bringing out the desired result. Hence, if the data iteration stage suggests that some of the parameters are not well suited for the model, they are taken back to the parameter iteration stage and parameters are added or modified.

Model Iteration: After the initial parameters and data sets are finalized, the model testing/training happens. The iteration in the model testing phase is all about running the same model simulation multiple times with the same parameters and data set, and then checking the amount of error. If the error varies significantly in every iteration, then there is something wrong with either the data or the parameters or both. Iterations are done on the data and parameters until the model achieves accuracy.

Human Iteration: This step involves human-induced iteration, where different models are put together to create a fully functional smart system. Here, multiple levels of fitting and refitting happen to achieve a coherent overall goal, such as creating a driverless car system or a fully functional AI.

Iteration is pivotal to creating smarter AI systems in the near future. The enormous memory requirements for performing multiple iterations on complex data sets continue to pose major challenges. But with increasingly better AI chips, storage options, and data transfer techniques, these challenges are getting easier to handle. We believe iterative machine learning techniques will continue to lead the transformation of the AI landscape in the near future.


Python Web Development Frameworks: Django or Flask?

Owen Roberts
22 Dec 2015
5 min read
I love Python. I’ve been using it for close to three years now, after a friend gave me a Raspberry Pi they had grown bored with. In the last year I’ve also started to seriously get into web development for my own personal projects, but juggling all these different languages can sometimes get a bit too much for me; so this New Year I’ve promised myself I’m going to get into the world of Python web development.

Python web dev has exploded in the last year. Django has been around for a decade now, but with long term support and the wealth of improvements that we’ve seen to the framework in just the last year, it’s really reaching new heights of popularity. Not only Django, but Flask’s rise to fame has meant that writing a web page doesn’t have to involve reams and reams of code either! Both these frameworks are about cutting down on time spent coding without sacrificing quality, but which one do you go for? In this blog I’m going to show you the best bundles you need to get started with taking Python to the world of the web, with titles I've been recommended - and at only $5 per eBook, hopefully this little hamper list inspires you to give something new a try for 2016!

So, first of all, which do you start with, Django or Flask? Let’s have a look at each and see what they can do for you.

Route #1: Django

So the first route to enter the world of Python web dev is Django, also touted as “the web framework for perfectionists with deadlines”. Django is all about clean, pragmatic design and getting to your finished app in as little time as possible. Having been around the longest, it's also got a great amount of support, meaning it's perfect for larger, more professional projects. The best way to get started is with our Django By Example or Learning Django Web Development titles. Both have everything you need to take the first steps in the world of web development in Python, taking what you already know and applying it in new ways. The By Example title is great as it works through 4 different applications to see how Django works in different situations, while the Learning title is a great supplement for learning the key features that need to be used in every application.

Now that the groundwork has been laid, we need to build upon it. With Django we've got to catch up with 10 years of experience and community secrets fast! Django Design Patterns and Best Practices is filled with some of the community's best hacks and cheats to get the most out of developing Django, so if you're a developer who likes to save time and avoid mistakes (and who doesn't?!) then this book is the perfect desk companion for any Django lover. Finally, to top everything off and prepare us for the next steps in the world of Django, why not try a new paradigm with Test-Driven Development with Django? I'm honestly one of those developers that hates having to test right at the end, so being able to peel down a complex critical task into layers throughout just makes more sense to me.

Route #2: Flask

Flask has exploded in popularity in the last year and it's not hard to see why - with the focus on as much minimal code as possible, it's perfect for developers who are looking to get a quick web page up, as well as those who just hate having to write mountains of code when a single line can do. As an added bonus, the creators of the framework looked at Django and took on board feedback from that community as well, so you get the combined force of two different frameworks at your fingertips.
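As a rough illustration of that minimalism, here is the kind of single-file app Flask is known for. This is a hedged sketch rather than anything from the books below; the route, message, and port are arbitrary assumptions.

```python
# Illustrative sketch only: a minimal single-file Flask app.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # One small function is all it takes to serve a page.
    return "Hello from a few lines of Flask!"

if __name__ == "__main__":
    app.run(port=5000)  # start the development server locally
```

Django gives you far more out of the box, but this is the trade-off the rest of this post weighs up.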
Flask is easy to pick up but difficult to master, so having a good selection of titles to help you along is the best way to get involved in this new world of Python web dev. Learning Flask Framework is the logical first step for getting into Flask. Released last month, it comes heartily recommended as the all-in-one first stop for getting the most out of Flask. Want to try a different way to learn, though? Well, the Learning Flask video is a great supplement to the learning title; it shows us everything we need to start building our first Flask sites in just under 2 hours - almost as quick as it takes the average Flask developer to build their own sites. The Flask Framework Cookbook is the next logical step as a desktop companion for someone just starting their own projects. Having over 80 different recipes to get the most out of the framework is essential for those dipping their feet into this new world without worrying about losing everything. Finally, Flask Blueprints is something a little different, and is especially good for getting the most out of Flask. Now, if you're serious about learning Flask you're likely to get everything you need quickly, but the great thing about the framework is how you apply it. The different projects inside this title make sure you can make the most of Flask's best features for every project you might come across!

Want to explore more Python? Take a look at our dedicated Python page. You'll find our latest titles, as well as even more free content.

What is Blockchain?

Lauren Stephanian
28 Apr 2017
6 min read
The difference between Blockchain and Bitcoin

Before we explore Blockchain in depth, it’s worth looking at the key differences between Blockchain and Bitcoin, which are very closely associated with each other. Both were conceived by a mysterious figure who goes by the alias Satoshi Nakamoto, and while both ideas are revolutionary, the key distinction is this: Blockchain was created for the implementation of Bitcoin, but it has far broader application. Bitcoin is ultimately just a cryptocurrency, and is actually very similar to any other currency in its use.

So, then, what is Blockchain?

Blockchain, put simply, is a way to store data or transactions on a growing ledger. The way that it works allows us to safely rely on the data shown to us, because it is built on the concept of decentralized consensus. Decentralized consensus is reached by a Blockchain because each block containing some data (for example, a certain amount of Bitcoins heading to someone's account) within the chain has an encryption called a hashing function that connects it to the next block, along with a time stamp. This chain of blocks continuously grows, and once a transaction is recorded, it is immutable--there is no going back and altering the data, only building on top of it. Everyone sees the same unchanged data on a Blockchain, and therefore actions based on this data, such as sending money to someone, can be safely taken: the information shown cannot be disputed, because every party agrees on it.
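To make the hash-linked ledger idea concrete, here is a small, illustrative Python sketch. It is not from the article and ignores the consensus and networking layers entirely; the block fields and transactions are toy assumptions, used only to show how each block commits to its predecessor's hash and a timestamp, so that altering an old block would break every hash after it.

```python
# Illustrative sketch only: a toy hash-linked ledger (no consensus, no network).
import hashlib
import json
import time

def make_block(data, previous_hash):
    block = {
        "timestamp": time.time(),
        "data": data,
        "previous_hash": previous_hash,
    }
    # The block's own hash covers all of its contents, so changing any earlier
    # block would change every hash that follows and be immediately visible.
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

chain = [make_block("genesis", previous_hash="0" * 64)]
chain.append(make_block("Alice pays Bob 5 BTC", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2 BTC", chain[-1]["hash"]))

for block in chain:
    print(block["hash"][:16], "<-", block["previous_hash"][:16])
```

Real blockchains add proof-of-work or another consensus mechanism on top of this linking, which is what makes the "every party agrees" property hold across many independent machines.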
Other uses for Blockchain

Although Blockchain is most commonly associated with Bitcoin, there are many different uses for this technology. Blockchain will change not only the way we work with one another, but also how we store data, identify ourselves, and even vote, by making these actions easier and secure by design. Below are eight ways Blockchain technology will revolutionize the way we conduct business and the way governments function. All of these areas create opportunities for developers.

Cryptocurrencies

The most common use for Blockchain is, as mentioned above, to create or mine cryptocurrencies. You can set up a store to accept these cryptocurrencies as payment, you can send money around the world to different people, or you can mine for money by using a tool called a miner, which collects money every time a new transaction is made. These currencies are theoretically totally safeguarded from theft.

Digital assets

Just as you might do with regular currencies, you can create financial securities using cryptocurrencies. For example, you can use Bitcoin to create derivatives, such as futures contracts, based on what you think the future value of Bitcoins will be. You can also create stocks and bonds, and almost any other security you might want to trade.

Decentralized exchanges

In a similar vein, Blockchain can be used to make financial exchanges safer, faster, and easier to track. A decentralized exchange is an exchange market that does not rely on third parties to hold onto customers’ funds and instead allows customers to trade directly and anonymously with each other. It reduces the risk of hacking because there is no single point of failure, and it is potentially faster because third parties are not involved.

Smart contracts

Smart contracts or decentralized applications (DApps) are just contracts that you can use in any way you might use a regular contract. Because they are based on Blockchain technology, they are decentralized, and therefore any third parties or middlemen (i.e. lawyers) can be effectively removed from the equation. They are cheaper, faster, and more secure than traditional contracts, which is why governments, banks, and real estate companies are beginning to turn to them.

Distributed Cloud Storage

Blockchain technology will be a huge disruptor in the data storage industry. Currently a third party, such as Dropbox, controls all your cloud data. Should they want to take it, alter it, or remove it, they are legally allowed to do so and you won't be able to do anything to stop them. However, no single party can control decentralized cloud storage, and therefore your data is more secure and untouchable by anyone you wouldn't want to interfere with it.

Verifiable Data (Identification)

Identity theft is a problem caused by a lack of security and by information being held by a centralized party. Blockchain can decentralize databases holding verifiable data like identification and make them less susceptible to hacking and theft. Once verifiable data is decentralized, it can easily be checked for accuracy by third parties needing to access it. Data breaches, such as what happened to Anthem in early 2015, are becoming more and more common, which indicates that we need our technology to adapt in order to keep up with the ever-changing landscape of the Internet. Blockchain will be the answer to this; it's just a matter of when this change will happen.

Digital voting

Perhaps the most relevant item on this list is digital voting. Currently, countless countries are host to reports of voter fraud, and whether they are true or not, this casts doubt on the credibility of any administration’s leadership. Additionally, using Blockchain technology could eventually allow for online voting, which would help correct low voter turnout because it makes voting easier and more accessible to all citizens. In 2014, a political party in Denmark became the first group to use Blockchain in their voting process. This secured their results and made them more credible. It would be advantageous for more countries to begin using Blockchain to verify election results, and some might even begin adopting the ability to count online votes for increased participation.

Academic credentialing

At least at one point in your life, you have probably heard about people lying about their alma mater on their resume, or even editing their transcripts. By having schools and certification programs upload credentials to a decentralized database, Blockchain can make verifying these important details fast and painless for all prospective employers.

What are the key takeaways?

Despite the lack of implementation in current businesses and governments, Blockchain will change our society for the better. It has already established a foothold in the tech realm, and will remain until the next great revolutionary technology comes along to change the way we store and share data.

About the Author

Lauren Stephanian is a software developer by training and an analyst for the structured notes trading desk at Bank of America Merrill Lynch. She is passionate about staying on top of the latest technologies and understanding their place in society. When she is not working, programming, or writing, she is playing tennis, traveling, or hanging out with her good friends in Manhattan or Brooklyn. You can follow her on Twitter or Medium at @lstephanian or via her website, http://lstephanian.com.


Deep reinforcement learning - trick or treat?

Bhagyashree R
31 Oct 2018
2 min read
Deep Reinforcement Learning (Deep RL) is the new buzzword in the machine learning world. Deep RL is an approach which combines reinforcement learning and deep learning in order to achieve human-level performance. It brings together the self-learning approach of discovering successful strategies that lead to the greatest long-term rewards, and allows agents to construct and learn their own knowledge directly from raw inputs.

With the fusion of these two approaches, we saw the introduction of many algorithms, starting with DeepMind’s Deep Q Network (DQN), a deep variant of the Q-learning algorithm. This algorithm reached human-level performance in playing Atari games. Combining Q-learning with reasonably sized neural networks and some optimization tricks, you can achieve human or superhuman performance in several Atari games. Deep RL also produced one of the most notable advancements in the game of Go: AlphaGo. The AI agent by DeepMind was able to beat the human world champions Lee Sedol (4-1) and Fan Hui (5-0). DeepMind then released further advanced versions of their agent, called AlphaGo Zero and AlphaZero. Much recent work from researchers at UC Berkeley has shown how both reinforcement learning and deep reinforcement learning have enabled the control of complex robots, both for locomotion and navigation.

Despite these successes, it is quite difficult to find cases where deep RL has added any practical real-world value. The current status is that it is still a research topic. One of its limitations is that it assumes the existence of a reward function, which is either given or hand-tuned offline. To get the desired results, your reward function must capture exactly what you want. RL has an annoying tendency to overfit to your reward, resulting in things you haven’t expected. This is the reason why Atari is a benchmark: it is not only easy to get a lot of samples, but the goal is fairly straightforward, i.e., to maximize the score. With so many researchers working towards introducing improved Deep RL algorithms, it surely is a treat.

AlphaZero: The genesis of machine intuition
DeepMind open sources TRFL, a new library of reinforcement learning building blocks
Understanding Deep Reinforcement Learning by understanding the Markov Decision Process [Tutorial]
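For readers who want to ground the Q-learning idea that DQN extends, here is a minimal, illustrative sketch of the tabular update rule. It is not from the article; the action set, hyperparameters, and the absence of any real environment are placeholder assumptions, and DQN replaces the lookup table below with a neural network.

```python
# Illustrative sketch only: tabular Q-learning (DQN swaps the table for a neural net).
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.99, 0.1    # learning rate, discount, exploration rate
actions = [0, 1]                          # assumed toy action set
Q = defaultdict(float)                    # Q[(state, action)] -> estimated long-term reward

def choose_action(state):
    if random.random() < epsilon:
        return random.choice(actions)                      # explore
    return max(actions, key=lambda a: Q[(state, a)])       # exploit current knowledge

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    # Nudge the estimate toward the observed reward plus discounted future value.
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```

The "reward function must capture exactly what you want" caveat in the article applies directly here: whatever signal is passed as `reward` is all the agent will ever optimize.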
Read more:
  • AlphaZero: The genesis of machine intuition
  • DeepMind open sources TRFL, a new library of reinforcement learning building blocks
  • Understanding Deep Reinforcement Learning by understanding the Markov Decision Process [Tutorial]


Why DeepMind made Sonnet open source

Sugandha Lahoti
03 Apr 2018
3 min read
DeepMind has always open sourced its projects with a bang. Last year, it announced that it was going to open source Sonnet, a library for quickly building neural network modules with TensorFlow. DeepMind shifted from Torch to TensorFlow as its framework of choice in early 2016, having been acquired by Google in 2014.

Why Sonnet if you have TensorFlow?

Since adopting TensorFlow as its framework of choice, DeepMind has enjoyed TF's flexibility and adaptiveness for building higher-level frameworks. In order to build neural network modules with TensorFlow, it created a framework called Sonnet. Sonnet doesn't replace TensorFlow; it simply eases the process of constructing neural networks. Prior to Sonnet, DeepMind developers were forced to become intimately familiar with the underlying TensorFlow graphs in order to correctly architect their applications. With Sonnet, the creation of neural network components is quite easy: it first constructs Python objects which represent some part of a neural network, and then separately connects these objects into the TensorFlow computation graph.

What makes Sonnet special?

Sonnet uses Modules. Modules encapsulate elements of a neural network, which in turn abstracts away low-level aspects of TensorFlow applications. Sonnet enables developers to build their own Modules using a simple programming model. These Modules simplify neural network training and can help implement individual neural networks that can be combined to build higher-level networks. Developers can also easily extend Sonnet by implementing their own modules. Using Sonnet, it becomes easier to switch between different models, allowing engineers to freely conduct experiments without worrying about hampering their entire projects.

Why open source Sonnet?

The announcement of Sonnet's open sourcing came on April 7, 2017. Most people appreciated it as a move in the right direction. One of DeepMind's focal purposes in open sourcing Sonnet was to let the developer community use Sonnet to take its own research forward. According to FossBytes, "DeepMind foresees Sonnet to be used by the community as a research propellant." With this open sourcing, the machine learning community can more actively contribute back by utilizing Sonnet in their own projects. Moreover, if the community becomes accustomed to DeepMind's internal libraries, it will become easier for the DeepMind group to release other machine learning models alongside research papers. Certain experienced developers also point out that using TensorFlow and Sonnet together is similar to using TensorFlow and Torch together, with one Reddit comment stating, "DeepMind's trying to turn TensorFlow into Torch". Nevertheless, open sourcing Sonnet is seen as part of DeepMind's broader commitment to open source AI research. And as Sonnet is adopted by the community, more similar frameworks are likely to develop that make neural network construction easier with TensorFlow as the underlying runtime - a further step towards the democratization of machine learning.
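As a rough illustration of the construct-then-connect pattern described above, here is a minimal sketch using the Sonnet v1 API with TensorFlow 1.x graph mode. The module choice and layer sizes are illustrative assumptions, not code from DeepMind.

import tensorflow as tf  # TensorFlow 1.x assumed
import sonnet as snt     # Sonnet v1 (pip install dm-sonnet)

# Step 1: construct a Python object representing part of a network.
# Nothing is added to the TensorFlow graph at this point.
mlp = snt.nets.MLP(output_sizes=[64, 10])  # hypothetical layer sizes

# Step 2: connect the module into the computation graph by calling it
# on a tensor. Variables are created on this first connection.
inputs = tf.placeholder(tf.float32, shape=[None, 784])
logits = mlp(inputs)

# Calling the same module object again reuses (shares) its variables,
# one of the conveniences Sonnet adds on top of raw TensorFlow.
more_logits = mlp(tf.placeholder(tf.float32, shape=[None, 784]))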
Sonnet is already available on GitHub and will be regularly updated by the DeepMind team to match the in-house version.


8 Myths about RPA (Robotic Process Automation)

Savia Lobo
08 Nov 2017
9 min read
Many say we are on the cusp of the fourth industrial revolution, which promises to blur the lines between the real, virtual, and biological worlds. Among many trends, Robotic Process Automation (RPA) is one of the buzzwords surrounding the hype of the fourth industrial revolution. Although poised to be a $6.7 trillion industry by 2025, RPA is shrouded in just as much fear as it is brimming with potential. We have heard time and again how automation can improve productivity, efficiency, and effectiveness while conducting business in transformative ways. We have also heard how automation, and machine-driven automation in particular, can displace humans and thereby lead to a dystopian world. As humans, we make assumptions based on what we see and understand. But sometimes those assumptions become so ingrained that they evolve into myths, which many start accepting as facts. Here is a closer look at some of the myths surrounding RPA.

1. RPA means robots will automate processes

The term robot evokes in our minds a picture of a metal humanoid with stiff joints that speaks in a monotone. RPA does mean robotic process automation, but the robot doing the automation is nothing like the ones we are used to seeing in the movies. These are software robots that perform routine processes within organizations. They are often referred to as virtual workers, or a digital workforce, complete with their own identity and credentials. They essentially consist of algorithms programmed by RPA developers with the aim of automating mundane business processes. These processes are repetitive, highly structured, fall within a well-defined workflow, consist of a finite set of tasks or steps, and may often be monotonous and labor intensive.

Let us consider a real-world example here: automating the invoice generation process. The RPA system will run through all the emails in the system and download the PDF files containing details of the relevant transactions. Then, it will fill a spreadsheet with the details and maintain all the records therein. Later, it will log on to the enterprise system and generate appropriate invoice reports for each entry in the spreadsheet. Once the invoices are created, the system will send a confirmation mail to the relevant stakeholders. Here, the RPA user only specifies the individual tasks that are to be automated, and the system takes care of the rest of the process. So, yes, while it is true that RPA involves robots automating processes, it is a myth that these robots are physical entities or that they can automate all processes.

2. RPA is useful only in industries that rely heavily on software

"Almost anything that a human can do on a PC, the robot can take over without the need for IT department support." - Richard Bell, former Procurement Director at Averda

RPA is software that can be injected into a business process. Traditional industries such as banking and finance, healthcare, and manufacturing, which have significant routine tasks that depend on software, can benefit from RPA; loan processing and patient data processing are some examples. RPA, however, cannot help with automating the assembly line in a manufacturing unit or with performing regular tests on patients. Even in industries that maintain essential daily utilities such as cooking gas, electricity, or telephone services, RPA can be put to use for generating automated bills, invoices, meter readings, and so on.
By adopting RPA, businesses in any industry can achieve significant cost savings, operational efficiency, and higher productivity. To leverage the benefits of RPA, users do not need to understand the SDLC process; it is more important that they have a clear understanding of business workflow processes and domain knowledge. Industry professionals can easily be trained on how to put RPA into practice.

The bottom line: RPA is not limited to industries that rely heavily on software to exist. But it is true that RPA can be used only in situations where some form of software is used to perform tasks manually.

3. RPA will replace humans in most frontline jobs

Many organizations employ a large workforce in frontline roles to do routine tasks such as data entry operations, managing processes, customer support, and IT support. But frontline jobs are just as diverse as the people performing them. Take sales reps, for example. They bring in new business through their expert understanding of the company's products and their potential customer base, coupled with the associated soft skills. Currently, they spend significant time on administrative tasks such as developing and finalizing business contracts, updating the CRM database, and making daily status reports. Imagine the spike in productivity if these chores could be taken off the plates of sales reps so they could focus on cultivating relationships and converting leads.

By replacing human effort in the mundane tasks within frontline roles, RPA can help employees focus on higher value-yielding tasks. In conclusion, RPA will not replace humans in most frontline jobs. It will, however, replace humans in a few roles that are very rule-based and narrow in scope, such as simple data entry operators or basic invoice processing executives. In most frontline roles like sales or customer support, RPA is quite likely to significantly change at least some aspects of how one sees one's job responsibilities. The adoption of RPA will also generate new job opportunities around the development, maintenance, and sale of RPA-based software.

4. Only large enterprises can afford to deploy RPA

The cost of implementing and maintaining RPA software, and of training employees to use it, can be quite high. This can make it an unfavorable business proposition for SMBs with fairly simple organizational processes and cross-departmental considerations. On the other hand, large organizations with higher revenue generation capacity, complex business processes, and a large army of workers can deploy an RPA system to automate high-volume tasks quite easily and recover that cost within a few months.

It is obvious that large enterprises will benefit from RPA systems due to the economies of scale offered and the faster recovery of investments made. SMBs (small to medium-sized businesses) can also benefit from RPA to automate their business processes, but only if they look at RPA as a strategic investment whose cost will be recovered over a longer time period, say 2-4 years.

5. RPA adoption should be owned and driven by the organization's IT department

The RPA team handling the automation process need not be from the IT department. The main role of the IT department is to provide the resources necessary for the software to function smoothly. An RPA reliability team trained in using RPA tools typically consists of business operations professionals rather than IT professionals.
In simple terms, RPA is not owned by the IT department but by the whole business, and it is driven by the RPA team.

6. RPA is an AI virtual assistant specialized to do a narrow set of tasks

An RPA bot performs a narrow set of tasks based on the given data and instructions. It is a system of rule-based algorithms which can be used to capture, process, and interpret streams of data, trigger appropriate responses, and communicate with other processes. However, it cannot learn on its own - a key trait of an AI system. Advanced AI concepts such as reinforcement learning and deep learning are yet to be incorporated into robotic process automation systems. Thus, an RPA bot is not an AI virtual assistant like Apple's Siri, for example. That said, it is not impractical to think that in the future these systems will be able to think on their own, decide the best possible way to execute a business process, and learn from their own actions to improve the system.

7. To use RPA software, one needs to have basic programming skills

Surprisingly, this is not true. Associates who use the RPA system need not have any programming knowledge. They only need to understand how the software works on the front end and how they can assign tasks to the RPA worker for automation. RPA system developers, on the other hand, do require some programming skills, such as knowledge of scripting languages. Today, there are various platforms for developing RPA tools, such as UIPath, Blueprism, and more, which empower RPA developers to build these systems without any hassle, reducing their coding responsibilities even further.

8. RPA software is fully automated and does not require human supervision

This is a big myth. RPA is often misunderstood as a completely automated system. Humans are indeed required to program the RPA bots, to feed them tasks for automation, and to manage them. The automation factor lies in aggregating and performing various tasks that would otherwise require more than one human to complete. There is also the efficiency factor: RPA systems are fast, and they almost completely avoid the faults in a system or process that are otherwise caused by human error. Having a digital workforce in place can be far more profitable than recruiting a human workforce.

Conclusion

One of the most talked about areas of technological innovation, RPA is clearly still in its early days and is surrounded by a lot of myths. However, there's little doubt that its adoption will take off rapidly as RPA systems become more scalable, more accurate, and faster to deploy. AI-, cognitive-, and analytics-driven RPA will take it up a notch or two and help businesses improve their processes even more by taking dull, repetitive tasks away from people. Hype can get ahead of reality, as we've seen quite a few times - but RPA is an area definitely worth keeping an eye on despite all the hype.

How should web developers learn machine learning?

Chris Tava
12 Jun 2017
6 min read
Do you have the motivation to learn machine learning? Given its relevance in today's landscape, you should be motivated to learn about this field. But if you're a web developer, how do you go about learning it? In this article, I show you how. So, let's break this down.

What is machine learning?

You may be wondering why machine learning matters to you, or how you would even go about learning it. Machine learning is a smart way to create software that finds patterns in data without having to explicitly program for each condition. Sounds too good to be true? Well, it is. Quite frankly, many of the state-of-the-art solutions to the toughest machine learning problems don't even come close to reaching 100 percent accuracy and precision. This might not sound right to you if you've been trained, or have learned, to be precise and deterministic with the solutions you provide to the web applications you've worked on. In fact, machine learning is such a challenging problem domain that data scientists describe problems as tractable or not. Computer algorithms can solve tractable problems in a reasonable amount of time with a reasonable amount of resources, whereas intractable problems simply can't be solved. Decades more of R&D are needed at a deep theoretical level to bring forward approaches and frameworks that will then take years to be applied and become useful to society. Did I scare you off? Nope? Okay, great. Then you accept this challenge to learn machine learning.

But before we dive into how to learn machine learning, let's answer the question: why does learning machine learning matter to you? Well, you're a technologist, and as a result it's your duty, your obligation, to be on the cutting edge. The technology world is moving at a fast clip, and it's accelerating. Take, for example, the shortening gap between public accomplishments of machine learning systems against top gaming experts: it took a while to get to the 2011 Watson v. Jeopardy champion, and far less time between AlphaGo and Libratus. So what's the significance to you and your professional software engineering career? Elementary, my dear Watson: just like the so-called digital divide between non-technical and technical lay people, there is already the start of a technological divide between top systems engineers and the rest of the playing field in terms of making an impact and disrupting the way the world works. Don't believe me? When's the last time you programmed a self-driving car or a neural network that can guess your drawings?

Making an impact and how to learn machine learning

The toughest part about getting started with machine learning is figuring out what type of problem you have at hand, because you run the risk of jumping to potential solutions too quickly, before understanding the problem. Sure, you can say this of any software design task, but this point can't be stressed enough when thinking about how to get machines to recognize patterns in data. There are specific applications of machine learning algorithms that solve a very specific problem in a very specific way, and it's difficult to know how to solve a meta-problem if you haven't studied the field from a conceptual standpoint. For me, a breakthrough in learning machine learning came from taking Andrew Ng's machine learning course on Coursera, so taking online courses can be a good way to start learning. If you don't have the time, you can learn about machine learning through numbers and images. Let's take a look.
Numbers

Conceptually speaking, predicting a pattern in a single variable based on a direct relationship (otherwise known as a linear relationship) with another piece of data is probably the easiest machine learning problem and solution to understand and implement. The following script predicts the amount of data that will be created based on fitting a sample data set to a linear regression model: https://github.com/ctava/linearregression-go. Because the sample data fits a linear model reasonably well, the machine learning program predicted that the data created in the fictitious Bob's system will grow from 2017 to 2018:

Bob's Data 2017: 4401
Bob's Data 2018 Prediction: 5707

This is great news for Bob and for you. You see, machine learning isn't so tough after all. I'd like to encourage you to save data for a single variable (also known as a feature) to a CSV file and see if you can find that the data has a linear relationship with time. The following website is handy for calculating the number of days between two dates: https://www.timeanddate.com/date/duration.html. Be sure to choose your starting day and year appropriately at the top of the file to fit your data.

Images

Machine learning on images is exciting! It's fun to see what the computer comes up with in terms of pattern recognition, or image recognition. Here's an example using computer vision to detect that grumpy cat is actually a Persian cat: https://github.com/ctava/tensorflow-go-imagerecognition. If setting up TensorFlow from source isn't your thing, not to worry. Here's a Docker image to start off with: https://github.com/ctava/tensorflow-go. Once you've followed the instructions in the readme.md file, simply:

Get github.com/ctava/tensorflow-go-imagerecognition
Run main.go -dir=./ -image=./grumpycat.jpg
Result: BEST MATCH: (66% likely) Persian cat

Sure, there is a whole discussion on this topic alone in terms of what TensorFlow is, what a tensor is, and what image recognition is. But I just wanted to spark your interest so that maybe you'll start to look at the amazing advances in the computer vision field. Hopefully this has motivated you to learn more about machine learning, based on reading about the recent advances in the field and seeing two simple examples of predicting numbers and classifying images. I'd like to encourage you to keep up with data science in general.
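If you'd rather stay in Python than set up the Go repository, here is a rough equivalent of the Numbers example above. The yearly counts below are hypothetical stand-ins for Bob's data, not values taken from the linked project.

import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical yearly record counts for the fictitious Bob's system.
years = np.array([[2013], [2014], [2015], [2016], [2017]])
counts = np.array([2100, 2650, 3300, 3870, 4401])

# Fit a linear model: counts ~ slope * year + intercept.
model = LinearRegression().fit(years, counts)

# Predict next year's data volume from the fitted line.
prediction = model.predict(np.array([[2018]]))[0]
print("Bob's Data 2018 Prediction: %.0f" % prediction)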
About the Author

Chris Tava is a Software Engineering / Product Leader with 20 years of experience delivering applications for B2C and B2B businesses. His specialties include: program strategy, product and project management, agile software engineering, resource management, recruiting, people development, business analysis, machine learning, ObjC / Swift, Golang, Python, Android, Java, and JavaScript.


CapsNet: Are Capsule networks the antidote for CNNs' kryptonite?

Savia Lobo
13 Dec 2017
5 min read
Convolutional Neural Networks (CNNs) are a group from the neural network family that has excelled in areas such as image recognition and classification. They are among the popular neural network models present in nearly all image recognition tasks that provide state-of-the-art results. However, CNNs have drawbacks, which are discussed later in this article. To address the issues with CNNs, Geoffrey Hinton, popularly known as the Godfather of Deep Learning, recently published a research paper along with two other researchers, Sara Sabour and Nicholas Frosst. In this paper, they introduced CapsNet, or Capsule Network: a neural network based on a multi-layer capsule system. Let's explore the issue with CNNs and how CapsNet advances on them.

What is the issue with CNNs?

Convolutional Neural Networks, or CNNs, are known to handle image classification tasks seamlessly. They are experts in learning at a granular level, where the lower layers detect the edges and shape of an object, and the higher layers detect the image as a whole. However, CNNs perform poorly when an image has a slightly different orientation (a rotation or a tilt), as they compare every image with the ones learned during training. For instance, if an image of a face is to be detected, a CNN checks for facial features such as the nose, two eyes, mouth, eyebrows, etc., irrespective of their placement. This means CNNs may identify an incorrect face in cases where the placement of an eye and the nose is not as conventionally expected, for example in a profile view. So the orientation and the spatial relationships between the objects within an image are not considered by a CNN.

To make CNNs understand orientation and spatial relationships, they were trained profusely with images taken from all possible angles. Unfortunately, this resulted in an excessive amount of time required to train the model, and the performance of the CNNs did not improve much. Pooling methods were also introduced at each layer within the CNN model for two reasons: first, to reduce the time invested in training, and second, to bring out positional invariance within CNNs. This resulted in triggering false positives: the network detected an object within an image without checking its orientation, and incorrectly declared it a correct match. Positional invariance thus made CNNs susceptible to minute changes in viewpoint. Instead of invariance, what CNNs require is equivariance: a feature that makes CNNs adapt to changes in rotation or proportion within an image. This equivariance feature is now possible via the Capsule Network!

The Solution: Capsule Network

CapsNet, or Capsule Network, is an encapsulation of nested neural network layers. A traditional neural network contains multiple layers, whereas a capsule network contains multiple layers within a single capsule. CNNs go deeper in terms of height, whereas a capsule network deepens in terms of nesting, or internal structure. Such a model is highly robust to the geometric distortions and transformations that result from non-ideal camera angles, and is thus able to handle orientations, rotations, and so on exceptionally well.

(CapsNet architecture; source: https://arxiv.org/pdf/1710.09829.pdf)

Key Features

Layer based Squashing

In a typical Convolutional Neural Network, a squashing (nonlinearity) function is added to each layer of the CNN model.
A squashing function compresses its input toward one of the ends of a small interval, introducing nonlinearity to the neural network and enabling the network to be effective. In a Capsule Network, by contrast, the squashing function is applied to the vector output of each capsule. The squashing function proposed by Hinton in the paper (source: https://arxiv.org/pdf/1710.09829.pdf) is:

v_j = (||s_j||^2 / (1 + ||s_j||^2)) * (s_j / ||s_j||)

where s_j is the total input to capsule j and v_j is its vector output. Instead of applying nonlinearity to each neuron, the squashing function applies it to a group of neurons, i.e., the capsule; to be more precise, it applies nonlinearity to the vector output of each capsule. The squashing function shrinks the output vector toward zero if it is a short vector, and if the vector is long, it limits the output vector's length to just under 1.
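To make the formula concrete, here is a small NumPy sketch of the squashing function as written above; it is an illustrative reading of the equation, not code from the paper's reference implementation.

import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Shrinks short vectors toward 0 and caps long vectors just under unit length.
    squared_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    norm = np.sqrt(squared_norm + eps)  # eps guards against division by zero
    return (squared_norm / (1.0 + squared_norm)) * (s / norm)

print(np.linalg.norm(squash(np.array([0.1, 0.0]))))   # ~0.0099: short vector shrinks
print(np.linalg.norm(squash(np.array([10.0, 0.0]))))  # ~0.9901: long vector capped near 1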
Dynamic Routing

The dynamic routing algorithm in CapsNet replaces the scalar-output feature detectors of a CNN with vector-output capsules. Also, the max pooling feature of CNNs, which led to positional invariance, is replaced with 'routing by agreement'. The algorithm ensures that when data is forward propagated, it goes to the most relevant capsule in the layer above. Although dynamic routing adds an extra computational cost to the capsule network, it has proved advantageous by making the network more scalable and adaptable.

Training the Capsule Network

The capsule network is trained using MNIST, a dataset which includes more than 60,000 handwritten digit images and is used to test machine learning algorithms. The capsule model is trained for 50 epochs with a batch size of 128, where each epoch is a complete run through the training dataset. A TensorFlow implementation of CapsNet based on Hinton's research paper is available in a GitHub repository. Similarly, CapsNet can also be implemented using other deep learning frameworks such as Keras, PyTorch, MXNet, etc.

CapsNet is a recent breakthrough in the field of deep learning and promises to benefit organizations with accurate image recognition tasks. CapsNet implementations are slowly catching up and are expected to reach parity with CNNs. So far, though, capsule networks have been trained on a very simple dataset, MNIST, and they still have to prove themselves on various other datasets. However, as time advances and we see CapsNet being trained within different domains, it will be exciting to watch how it moulds itself into a faster and more efficient training technique for deep learning models.


Customer Relationship Management just got better with Artificial Intelligence

Savia Lobo
28 Jan 2018
8 min read
According to an International Data Corporation (IDC) report, Artificial Intelligence (AI) has the potential to impact many areas of customer relationship management (CRM). AI as an armor will ease mundane tasks for CRM teams, which means they will be able to address more customer queries through an automated approach. An AI-based expert CRM offers highly predictive and intuitive ways of handling customer problems, thus grabbing maximum customer attention.

With AI, CRM platforms within different departments such as sales, finance, and marketing do not limit themselves to getting service feedback from their customers. They can also gain information from the data that customers generate online, i.e., on social media or IoT devices. With such a massive amount of data hailing from various channels, it becomes tricky for organizations to keep track of their customers, and extracting detailed insights from huge amounts of data becomes all the more difficult. Here is the gap where organizations feel the need to bring an AI-based, optimized approach to their CRM platform. An AI-enabled platform can assist CRM teams in gaining insights from the large aggregation of customer data while also paving the way for seamless customer interactions. Organizations can not only provide customers with helpful suggestions, but also recommend products to boost their business profitability. AI-infused CRM platforms can take over straightforward tasks, such as collecting client feedback, that are otherwise time consuming. This allows businesses to focus on customers that provide higher business value, who might have been neglected previously. It also acts as a guide for executive-level employees via a virtual assistant, allowing them to tackle customer queries without any assistance from senior executives.

AI techniques such as natural language processing (NLP) and predictive analytics are used within the CRM domain to gain intelligent insights that enhance human decision making. NLP interprets incoming emails, categorizes them on the basis of intent, and automatically drafts responses by identifying the priority level. Predictive analytics helps in detecting the optimal time for solving customer queries, and the mode of communication that will best engage the customer. With such functionalities, a smarter move towards digitizing organizational solutions can be achieved, reaping huge profits for organizations that wish to leverage it.

How AI is transforming CRM

Businesses aim to satisfy the customers who use their services, because keeping a customer happy can lead to further increases in revenue generation. Organizations can achieve this rapidly with the help of AI. Salesforce, the market leader in the CRM space, integrated an AI assistant popularly known as Einstein. Einstein makes CRM an easy-to-use platform by simply allowing customers to import their data into Salesforce; it automatically provides ready-to-crunch, data-driven insights across different channels. Other organizations, such as SAP and Oracle, are implementing AI-based technologies for their CRM platforms to provide an improved customer experience. Let's explore how AI benefits an organization:

Steering Sales

With AI, the sales team can shift their focus from mundane administrative tasks and get to know their customers better. A sales CRM team can leverage novel scoring techniques that help prioritize quality leads, thus generating maximum revenue for the organization.
Sales leaders, with the help of AI, can work towards improving sales productivity. After analyzing the company's historical data and employee activities, AI-infused CRM software can present a performance report of the top sales representatives. Such a feature helps sales leaders strategize what the bottom-line representatives should learn from the top representatives in order to drive conversations with their customers that show a likelihood of generating sales. People.ai, a sales management platform, utilizes AI to deliver performance analytics and personalized coaching, and to provide reviews of the sales pipeline. This can help sales leaders get a complete view of the sales activities going on within their organizations.

Marketing it better

Triggering a customer sale requires extensive push marketing strategies. With Artificial Intelligence-enabled marketing, customers are taken on a predictive journey designed to end in a sale or a subscription; either way, it is a win-win situation for the organization. Predictive scoring can intelligently determine the likelihood of a customer subscribing to a newsletter or making a purchase. AI can also analyze images across various social media sources, such as Pinterest and Facebook, and can provide suggestions for the visuals of an upcoming advertising campaign. Also, by carrying out sentiment analysis on product reviews and customer feedback, the marketing team can take into account users' sentiment about a particular brand or product. This helps brands announce discount offers when sales decrease, or increase the production of a product in demand. Marketo, a marketing automation platform, includes software which helps different CRM platforms gain rich behavioral insights about their customers and drive business strategies.

24*7 customer support

Whenever a customer query arises within a CRM, AI anticipates the likely issues and resolves them before they grow into problems. Different customer cases are classified and directed to the right service agent with the help of predictive analytics techniques. Also, NLP-based digital assistants known as chatbots are used to analyze the written content within emails. A chatbot efficiently responds to customer emails; only in rare cases does it redirect an email to a human service agent. Chatbots can even notify customers about an early-bird offer on a product they are likely to buy. They can also schedule meetings and send reminders, a natural fit in the era of push notifications and smart wearables. Hence, with AI in CRM, organizations can not only offer customers better services but also provide 24*7 support. Agent.ai, an AI-based customer service platform, allows organizations to provide 24*7*365 customer support, including holidays, weekends, and non-staffed hours.

Application development no more a developer's play

Building an application has become an important milestone for any organization. If the application has a seamless and user-friendly interface, it is favoured by many customers, and thus the organization gets more customer traction. Building an application used to be considered 'a developer's job only' because it involved coding. However, with the rise of platforms that help build an application with less coding, or in fact no coding, any non-coder can easily develop an application. CRM platforms help businesses build applications that provide insight-driven predictions and recommendations to their customers.
Salesforce assures its customers that each application built on its platform includes intelligent data modeling, tracking, and monitoring. Business users, data scientists, and other non-developers can now build applications without learning to code, which helps them create prediction-based applications their way, without the IT hassle.

Challenges & Limitations

AI implementations are becoming common, with an increasing number of organizations adopting AI on both small and large scales. Many businesses are moving towards smart customer management by infusing AI within their organizations. AI undoubtedly brings ease of work, but there are challenges a CRM platform can face which, if unaddressed, may cause revenue decline for businesses. Below are the challenges organizations might face while setting up AI in their CRM platform:

Outdated data: Organizations collect a huge amount of data during various business activities in order to drive meaningful insights about sales, customer preferences, and so on. This data is a treasure trove for the marketing team when planning strategies to attract new customers and retain existing ones. On the contrary, if the data provided is not up to date, CRM teams may find it difficult to understand the current state of customer relationships. To avoid this, a comprehensive data cleanup project is essential for maintaining better data quality.

Partially automated: AI creates an optimized environment for CRM with the use of predictive analytics and natural language processing for better customer engagement. This eases the mundane elements for the CRM team, who can then focus on other strategic outcomes. This does not imply that AI is completely replacing humans. Instead, a human touch is required to monitor whether the solutions given by the AI benefit the customer, and to tweak it into a much smarter AI.

Intricacies of language: An AI is trained on data which includes various sets of phrases and questions, along with the desired outputs it should give. If the customer's query is not phrased correctly, the AI is unable to provide a correct solution. Hence, customers have to take care to phrase their queries correctly, or the machine will not understand what they aim to ask.

Infusing AI into CRM has multiple benefits, but the three most important ones are predictive scoring, forecasting, and recommendations. These benefits empower CRM to outsmart its traditional counterpart by helping organizations serve their customers with state-of-the-art results. Customers appreciate when their query is addressed in less time, and that leaves a positive impression of the organization. Additionally, digital assistants help firms solve customer queries quickly.

Essential Tools for Go Programming

Nicholas Maccharoli
14 Jan 2016
5 min read
Golang as a programming language is a pleasure to work with, but the reason for this also comes largely from the great community around the language and its modern tool set, both from the standard distribution and third-party tools.

The go command

On a system with go installed, type go with no arguments to see its quick help menu. Here, you will see the basic go commands, such as build, run, get, install, fmt, and so on. Go ahead and take a minute to run go help on some verbs that look interesting; I promise I'll be here when you get back.

Basic side options

The go build and go run commands do what you think they do, as is also the case with go test, which runs any test files in the directory it is passed. The go clean command wipes out all the compiled and executable files from the directory in which it is run. Run this command when you want to force a build to be made entirely from source again. The go version command prints out the version and build info, as you might expect. The go env command is very useful when you want to see exactly how your environment is set up. Running it will show where all your environment variables point and will also make you aware of which ones are still not properly set.

go doc: Which arguments did this take again?

Whenever in doubt, just give go doc a call. Just running go doc [Package Name] will give you a high-level readout of the types, interfaces, and behavior defined in this package; that is, go doc net/http will give you all the function stubs and types defined. If you just need to check the order or types of arguments that a function takes, run go doc on the package and use a tool like grep to grab the relevant line:

go doc net/http | grep -i servecontent

This will produce just what we need!

func ServeContent(w ResponseWriter, req *Request, name string, modtime time.Time, content io.ReadSeeker)

gofmt

This little tool is quite a time-saver. I mainly use it to ensure that my source files are stylistically correct, and I also use the -s flag to let gofmt simplify my code. Just run gofmt -w on a file or an entire directory to fix up the files in place. After running this command, you should see the proper use of white space and indentation corrected to eight-space tabs. Here is a diff of a file with poor formatting that I ran through gofmt:

Original:

package main

import "fmt"

func main() {
hello_to := []string{"Dust", "Trees", "Plants", "Carnivorous plants"}
for _, value := range hello_to {
fmt.Printf("Hello %v!\n", value)
}
}

After running gofmt -w Hello.go:

package main

import "fmt"

func main() {
	hello_to := []string{"Dust", "Trees", "Plants", "Carnivorous plants"}
	for _, value := range hello_to {
		fmt.Printf("Hello %v!\n", value)
	}
}

As you can see, the indentation looks much better and reads way easier!

The magic of gofmt -s

The -s flag to gofmt helps clean up unnecessary code; so, the intentionally ignored values in the following code:

hello_to := []int{1, 2, 3, 4, 5, 6}
for count, _ := range hello_to {
	fmt.Printf("%v: Hello!\n", count)
}

would get converted to the following after running -s:

hello_to := []int{1, 2, 3, 4, 5, 6}
for count := range hello_to {
	fmt.Printf("%v: Hello!\n", count)
}

The awesomeness of go get

One of the really cool features of the go command is that go get works seamlessly with code hosted on GitHub as well as repositories hosted elsewhere.
A note of warning

Make sure that $GOPATH is properly set (this is usually exported as a variable in your shell). You may have a line such as export GOPATH=$HOME in your shell's profile file.

Nabbing a library off of GitHub

Say we see this really neat library we want to use, called fasthttp. Using only the go tool, we can fetch the library and get it ready for use, all with just:

go get github.com/valyala/fasthttp

Now all we have to do is import it with the exact same path, and we can start using the library right away! Just type this and it should do the trick:

import "github.com/valyala/fasthttp"

In the event that you want to have a look around in the library you just downloaded with go get, just cd into $GOPATH/src/[path that was provided to the get command], in this case $GOPATH/src/github.com/valyala/fasthttp, and feel free to inspect the source files. I am also happy to inform you that you can use go doc with the libraries you download in exactly the same way as with the standard library! Try it: type go doc fasthttp (you might want to tack on less since the output is a little long: go doc fasthttp | less).

Those are only the stock features and options! The go tool is great and gets the job done, but there are also great alternatives to some of the go tool's features, such as the godep package manager. If you have some time, I think it's worth the investment to learn!

About the author

Nick Maccharoli is an iOS/backend developer and an open source enthusiast working at a start-up in Tokyo and enjoying the current development scene. You can see what he is up to at @din0sr or github.com/nirma.


Can Cryptocurrency establish a new economic world order?

Amarabha Banerjee
22 Jul 2018
5 min read
Cryptocurrency has already established one thing: there is a viable alternative to dollars and gold as a measure of wealth. Our present economic system is flawed, and cryptocurrencies, if utilized properly, can change the way the world deals with money and wealth. But can they completely overthrow the present system and create a new economic world order? To answer this, we have to understand the concept of cryptocurrencies and the premise for their creation.

Money - The weapon to control the world

Money is a measure of wealth, which translates into power. The power centers have largely remained the same throughout history, be it a monarchy, an autocracy, or a democracy; power has shifted from one king to one dictator to a few elected or selected individuals. To remain in power, they had to control the source and distribution of money. That's why, to this day, only the government can print money and distribute it among citizens. We can earn money in exchange for our time and skills, or loan money in exchange for our future time. But there's only so much time that we can give away, and hence the present-day economy always runs on the philosophy of scarcity and demand. Money distribution follows a trickle-down approach in a pyramid structure.

(Image source: Credit Suisse)

Inception of Cryptocurrency - Delocalization of money

It's abundantly clear from the image above that while the printing of money is under the control of the powerful and the wealth creators, the pyramidal distribution mechanism has also ensured that very little money flows to the bottom-most segments of the population. The money creators have ensured their own safety and prosperity throughout history by accumulating chunks of money for themselves. Subsequently, the global wealth gap has increased staggeringly. This could well have triggered the rise of cryptocurrencies as a form of alternative economic system, one that, theoretically, doesn't just accumulate wealth at the top, but also rewards anyone who is interested in mining these currencies and spending their time and resources. The main concept that made this possible was the distributed computing mechanism, which has gained tremendous interest in recent times.

Distributed Computing, Blockchain & the possibilities

The foundation of our present economic system is a central power, be it a government, a ruler, or a dictator. The alternative to this central system is a distributed system, where every single node of communication contains the power of decision making and is equally important to the system. If one node is cut off, the system will not fall apart; it will keep on functioning. That's what makes distributed computing terrifying for centralized economic systems: they can't just attack the creator of the system or use a violent hack to bring down the entire system.

(Image source: Medium.com)

When the white paper on cryptocurrencies was first published by the anonymous Satoshi Nakamoto, there was hope of constituting a parallel economy, where any individual with access to a mobile phone and the internet might be able to mine bitcoins and create wealth, not just for themselves, but for the system as well. Satoshi himself invented the concept of Blockchain, an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way. Blockchain was the technology on top of which Bitcoin, the first unit of cryptocurrency, was created. The concept of Bitcoin mining seemed revolutionary at that time.
The more people that joined the system, the more enriched the system would become. The hope was that it would make the mainstream economic system take note and cause a major overhaul of the wealth distribution system. Sadly, none of that seems to have taken place yet.

The phase of Disillusionment

The reality is that Bitcoin mining capability was governed by computing resources, and the creators had accumulated enough bitcoins for themselves, much as in the traditional wealth creation system. Satoshi's Bitcoin holdings were valued at $19.4 billion during the December 2017 peak, making him the 44th richest person in the world at that time. This basically meant that the wealth distribution system was at fault again: very few could get their hands on Bitcoins once their price in traditional currencies had climbed. Governments then duly played their part, declaring trading in Bitcoins illegal and cracking down on several cryptocurrency top guns. Recently, more countries have joined the bandwagon to ban cryptocurrency, and hence its value is much lower now. The major concern is that skepticism in the public mind might kill the hype earlier than anticipated.

(Image source: Bitcoin.com)

The Future and Hope for a better Alternative

What we must keep in mind is that Bitcoin is just one derivative of the concept of cryptocurrencies. The primary concept of distributed systems, and the resulting technology, Blockchain, is still a very viable and novel one. The problem with the current Bitcoin system is the distribution mechanism. Whether we will be able to tap into the distributed system concept and create a better version of the Bitcoin model, only time will tell. But for the sake of better wealth propagation and balance, we can only hope that this realignment of the economic system happens sooner rather than later.
Read more:
  • Blockchain can solve tech's trust issues – Imran Bashir
  • A brief history of Blockchain
  • Crypto-ML, a machine learning powered cryptocurrency platform