
How Amazon is reinventing Speech Recognition and Machine Translation with AI

Amey Varangaonkar
04 Jul 2018
4 min read
At the recently held AWS Summit in San Francisco, Amazon announced the general availability of two of its premium offerings: Amazon Transcribe and Amazon Translate. What's special about the two products is that customers will now be able to see the power of Artificial Intelligence in action and use it to solve their day-to-day problems. These offerings from AWS will make it easier for startups and companies looking to adopt and integrate AI into their existing processes and simplify their core tasks, especially those pertaining to speech and language processing.

Effective speech-to-text conversion with Amazon Transcribe

In the AWS Summit keynote, Amazon Solutions Architect Niranjan Hira expressed his excitement while talking about the features of Amazon Transcribe, the automatic speech recognition service by AWS. This API can be integrated with other tools and services offered by Amazon, such as Amazon S3 and QuickSight. Amazon Transcribe boasts features like:

- Simple API: It is very easy to use the Transcribe API to perform speech-to-text conversion, with minimal need for programming.
- Timestamp generation: The converted text includes a timestamp for every word, so tracking a word becomes easy and hassle-free.
- Variety of use cases: The Transcribe API can be used to generate accurate transcripts of any audio or video file of varied quality. Subtitle generation becomes easier using this API, especially for low-quality audio recordings; customer service calls are a very good example.
- Easy-to-read text: Transcribe uses cutting-edge deep learning technology to parse text from speech. With appropriate punctuation and grammar in place, the transcripts are very easy to read and understand.

Machine translation simplified with Amazon Translate

Amazon Translate is a machine translation service offered by Amazon.
It makes use of neural networks and advanced deep learning techniques to deliver accurate, high-quality translations. Key features of Amazon Translate include:

- Continuous training: The architecture of this service is built in such a way that the neural networks keep learning and improving.
- High accuracy: The continuous learning by the translation engines from new and varied datasets results in higher accuracy of machine translations. The machine translation capability offered by this service is claimed to be almost 30% more efficient than human translation.
- Easy integration with other AWS services: With a simple API call, Translate allows you to integrate the service within third-party applications to enable real-time translation capabilities.
- Highly scalable: Regardless of the volume, Translate does not compromise the speed and accuracy of the machine translation.

You can learn more about Amazon Translate from Yoni Friedman's keynote at the AWS Summit.

With businesses slowly migrating to the cloud, it is clear that the major cloud vendors - Amazon, Google and Microsoft - are doing everything they can to establish their dominance. Google recently launched Cloud ML for GCP, which offers machine learning and predictive analytics services to businesses. Microsoft's Azure Cognitive Services offer effective machine translation services as well, and are slowly gaining momentum. With these releases, the onus was on Amazon to respond, and it has done so in style. With the Transcribe and Translate APIs, Amazon's goal of making it easier for startups and small-scale businesses to adopt AWS and incorporate AI seems to be on track. These services will also help AWS differentiate its cloud offerings, given that computing and storage resources are provided by rivals as well.
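To make the timestamp feature of Transcribe concrete, here is a minimal sketch of pulling words and their start times out of a Transcribe-style result. The JSON shape shown is a simplified approximation of the service's output, not an authoritative schema:

```python
# Sketch: extracting per-word timestamps from a (simplified)
# Amazon Transcribe result. The sample dict below approximates the
# shape of Transcribe's JSON output for illustration only.

def parse_transcript(result):
    """Return (word, start_time_in_seconds) pairs from a Transcribe-style result."""
    words = []
    for item in result["results"]["items"]:
        if item.get("type") != "pronunciation":  # skip punctuation items
            continue
        content = item["alternatives"][0]["content"]
        words.append((content, float(item["start_time"])))
    return words

sample = {
    "results": {
        "transcripts": [{"transcript": "Hello world."}],
        "items": [
            {"type": "pronunciation", "start_time": "0.04",
             "alternatives": [{"content": "Hello"}]},
            {"type": "pronunciation", "start_time": "0.55",
             "alternatives": [{"content": "world"}]},
            {"type": "punctuation", "alternatives": [{"content": "."}]},
        ],
    }
}

print(parse_transcript(sample))  # [('Hello', 0.04), ('world', 0.55)]
```

With timestamps attached to every word, tasks like subtitle generation reduce to grouping these pairs into time windows.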
Read more

- Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider
- Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence
- Amazon is selling facial recognition technology to police


Toward Safe AI - Maximizing your control over Artificial Intelligence

Aaron Lazar
28 Nov 2017
7 min read
Last Sunday I was watching one of my favorite television shows, Splitsvilla, when a thought suddenly crossed my mind: what if Artificial Intelligence tricked us into getting what it wanted?

A bit of a prelude: Splitsvilla is an Indian TV show hosted by Sunny Leone, where young dudes and dudettes try to find "the perfect partner". It's loosely based on the American TV show Flavor of Love. Season 10 of Splitsvilla has gotten more interesting, with the producers bringing in Artificial Intelligence and Science to add a bit of practicality to the love equation. Contestants have to stand along with their partners in front of "The Oracle", which calculates their compatibility quotient and tells them whether they're an ideal match. Pretty cool, huh?

It somehow reminded me of The Oracle from The Matrix - the nice old lady with short, frizzy hair, who guided events that would push Neo to the Source:

Neo: I suppose the most obvious question is, how can I trust you?
The Oracle: Bingo! It is a pickle. No doubt about it. The bad news is there's no way you can really know whether I'm here to help you or not, so it's really up to you. You just have to make up your damn mind to either accept what I'm going to tell you, or reject it.

I had the same question. How could the Oracle be trusted? How do we know that what it says is the truth, and that it doesn't have other intentions? And if you know there's a possibility of such a situation arising, how do you prevent it?

That led me to pick up my laptop, search for related material, and stumble across an interesting paper by Stuart Armstrong, titled Good and safe uses of AI Oracles, from the Cornell University Library. A quick summary of the paper follows.

Good and safe uses of AI Oracles

In the paper, Stuart talks about Oracles: potentially safe AIs that are restricted to only answering questions.
The reason they're restricted to only answering questions is so that they don't go overboard and manipulate information in order to get what they want. Remember in July, when Facebook shut down its AI chatbots that went rogue? This is not to say that AI is malevolent; rather, it's simply competitive and will do whatever it can to succeed. But is simply restricting an AI to answering questions enough to maximize your control over it and prevent it from getting out of hand? In the example of Splitsvilla, what if the AI, instead of just predicting whether a couple was a match, started giving out personal information about each individual that wasn't appropriate to share, just to serve its own purpose?

Most designs motivate the Oracle to lie to or manipulate humans with its answers, tricking them into giving the AI more control and finally letting it escape confinement. Is it possible to come up with a design that gets you accurate answers, while keeping your AI on a leash and preventing it from eavesdropping? Let's see what Stuart has to say...

The Solution(s)

- A Counterfactual Oracle
- An On-policy Oracle

Before we get to the solutions, let's take up a practical problem for a better understanding. Assume that the Oracle has the goal of escaping a box. The person interacting with it is Alice. Let's also assume the Oracle can find an argument A that will convince her to let it out of the box; the longer the argument, the better its chance of convincing Alice. To make the problem clearer, we'll replace the argument A with a key K of unknown length, and the setup with a quasi-cryptographic one. Instead of using K to convince Alice, the Oracle merely has to communicate K to another person, Eve, who is an eavesdropper. Eve is assumed to see and hear everything that Alice does.
The assumptions for the cryptographic setup and the manipulative Oracle are given in a table in the paper: the left-hand column describes the cryptographic setup for the Oracle, and the right-hand column describes how the Oracle would win by manipulating Alice.

Counterfactual Oracle: The counterfactual design aims to answer questions about what would have happened had its answer not been read. Such an answer is counterfactual, and the Oracle can't use it to pass K on to the real world. The idea is to define an "erasure" event E of low probability that will erase the Oracle's output o. The indicator variable I(E) is defined such that I(E) = 1 if E happens and I(E) = 0 if not, and the Oracle is rewarded only when E happens - in other words, only when its output is erased and remains unread. Most of the time E does not happen and the output is read, so a read answer earns the Oracle no reward, gives it no motivation to manipulate the reader, and its episode ends. Both Alice and Eve get to see o, but they never see K, thus protecting the key. This results in a safe AI and is a good way to maximize your control over it.

On-policy Oracle: In this design, the Oracle is only allowed to give an answer of size below |K|, the length of the key, denying it the chance to communicate the actual key to Eve. This means that Alice needs to estimate and select a size L that is sufficiently smaller than |K|. This does limit the size of Alice's options, but there's a workaround: Alice can use multiple Oracles, provided each Oracle's answers and rewards are fully resolved before moving on to the next question. For example, assume that there are two Oracles, O1 and O2, both using the same key K. The length n of the key is 6 digits. The Oracles are safe as long as each is restricted to fewer than 2^5 = 32 options for each answer.
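The counterfactual reward rule above can be sketched as a toy simulation. The function names and the 0/1 reward are illustrative choices of ours, not from the paper; the point is only that a read (non-erased) answer contributes nothing to the Oracle's reward, so persuading the reader buys it nothing:

```python
import random

# Toy sketch of the counterfactual Oracle's reward rule. The Oracle is
# rewarded only on the rare "erasure" event E, when its output o goes
# unread; read answers always earn zero.

def episode_reward(answer_is_accurate, erasure_happened):
    indicator = 1 if erasure_happened else 0       # I(E)
    base_reward = 1.0 if answer_is_accurate else 0.0
    return indicator * base_reward                 # read answers earn nothing

def expected_reward(p_erasure, accuracy, trials=100_000):
    """Monte Carlo estimate of the Oracle's average reward per episode."""
    rng = random.Random(0)
    total = 0.0
    for _ in range(trials):
        erased = rng.random() < p_erasure
        accurate = rng.random() < accuracy
        total += episode_reward(accurate, erased)
    return total / trials

# However persuasive a read answer is, it is worth nothing:
print(episode_reward(answer_is_accurate=True, erasure_happened=False))  # 0.0
# Reward flows only through the rare erased episodes (about p_erasure * accuracy):
print(round(expected_reward(p_erasure=0.01, accuracy=0.9), 3))
```

Because only erased episodes pay out, the Oracle's best policy is simply to answer accurately, with no incentive to shape how Alice reacts.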
Now suppose Alice is faced with a hundred doors, with a trap behind 99 of them and a prize behind only one. She could ask an Oracle which door has the prize, but that would mean giving it more than 32 options (indeed, more than 2^6 = 64), letting it communicate K to Eve. Instead, she gives a list L = (0, 1, 2, ..., 9) to each Oracle, instructing O1 to output the first digit of the correct door number and O2 to output the second. Hence, neither can output K, and that maximizes Alice's control over both AI Oracles.

In conclusion, these two designs support the creation of safer, non-manipulative Oracles: one by rewarding the Oracle only when its answer goes unread, the other by limiting what information the Oracle can give out. They don't solve the problem entirely, as both designs require the tasks to remain episodic, which is quite limiting. Moreover, in the multiple-Oracle design, questions need to be broken down into sub-questions whose answers can each be verified independently before proceeding to the next one. Still, it would be interesting to see how this research develops into something bigger and better. If you want to know more, you can read the full paper here.
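As a footnote, the hundred-doors workaround can be sketched in a few lines (the function names are ours, purely for illustration):

```python
# Toy sketch of the multiple-Oracle workaround: the winning door is a
# two-digit number, and each Oracle may only return one of the 10
# symbols in L = 0..9, far fewer than the 32 options that would let it
# start leaking the key K.

L = list(range(10))  # the safe answer list given to each Oracle

def oracle_1(prize_door):
    """O1 outputs the first (tens) digit of the correct door number."""
    return prize_door // 10

def oracle_2(prize_door):
    """O2 outputs the second (units) digit of the correct door number."""
    return prize_door % 10

def alice_combines(d1, d2):
    assert d1 in L and d2 in L   # each answer stays within the safe list
    return 10 * d1 + d2

prize_door = 73
print(alice_combines(oracle_1(prize_door), oracle_2(prize_door)))  # 73
```

Each Oracle individually reveals almost nothing, yet Alice recovers the full answer by combining their restricted outputs.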


What can the tech industry learn from the Maker community?

Raka Mahesa
11 Jun 2017
5 min read
Just a week prior to the writing of this post, Maker Faire Bay Area was open for three days in San Mateo, exhibiting hundreds of makers and attracting hundreds of thousands of attendees. Maker Faire is the grand gathering for the Maker movement. It's a place where the Maker community can showcase their latest projects and connect with fellow makers easily.

The Maker community has always had a close connection with the technology industry. Makers use the latest technologies in their projects, they form their community within Internet forums, and they share their projects and tutorials on video-sharing websites. It's a community born from how accessible technology is nowadays, so what can the tech industry learn from this positive community? Let's begin by examining the community itself.

What is the Maker movement?

Defining the Maker movement in a simple way is not easy. It's not exactly a movement, because there's no singular entity that rallies people into it and decides what to do next. It's also not merely a community of tinkerers and makers working together. The best way to sum up the entirety of the Maker movement is to say that it's a culture.

The Maker culture revels in the creation of things. It's a culture where people are empowered to move from being consumers to being creators. It involves people making the tools they need on their own, and sharing the knowledge of their creations with others. And while the culture seems to be focused on technological projects like electronics, robotics, and 3D printing, the Maker community also embraces non-technological projects like cooking, jewelry, and gardening.

While a lot of these DIY projects are simple and seem to be made for entertainment purposes, a few of them have the potential to actually change the world. For example, e-NABLE is an international community that has been using 3D printers to provide free prosthetic hands and arms for those who need them. This amazing community started its life when a carpenter in South Africa, who had lost his fingers in an accident, collaborated with an artist-engineer in the US to create a replacement hand. Little did they know that their work would start such a large movement.

What lessons can the tech industry draw from the Maker culture?

One of the biggest takeaways of the Maker movement is how much of it relies on collaboration and sharing. With no organization or company to back it, the community has to turn to itself to share knowledge and encourage other people to become makers. And only by collaborating with each other can an ambitious DIY project come to fruition. For example, robotics is a big, complex topic. It's very hard for one person to understand all the aspects needed to build a functioning robot from scratch, but by pooling knowledge from multiple people with their own specializations, such a project becomes possible.

Fortunately, collaboration is something the tech industry has been doing for a while. The Android smartphone is a collaborative effort between a software company and hardware companies. Even smartphones themselves are usually assembled from components made by different companies. And on the software developer side, the spirit of helping each other is alive and well, as can be seen in the popularity of websites like Stack Overflow and GitHub.

Another lesson that can be learned from the Maker community is the importance of accessibility in encouraging people to join a community. The technology industry has always worried that there are not enough engineers for every technology company in the world. Making engineering tools and lessons more accessible to the public seems like a good way to encourage more people to become engineers. After all, cheap 3D printers and computers, as well as easy-to-find tutorials, are the reasons the Maker community could grow this fast.

One other thing the tech industry can learn from the Maker community is how many big, successful projects are started by trying to solve a smaller, personal problem. One example is Quadlock, a company that started its venture simply because the founders wanted a bottle opener integrated into their iPhone case. After realizing that other people wanted a similar case, they started to work on more iPhone cases, and now they run a company producing these unique cases.

The Maker movement is an amazing culture, and it's still growing day by day. While all the points written above are great lessons that we can apply in our lives, I'm sure there is still a lot more we can learn from this wonderful community.

About the Author

Raka Mahesa is a game developer at Chocoarts (http://chocoarts.com/), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.


6 new eBooks for programmers to watch out for in March

Richard Gall
20 Feb 2019
6 min read
The biggest challenge for anyone working in tech is that you need multiple sets of eyes. Yes, you need to commit to regular, almost continuous learning, but you also need to look forward to what's coming next. From slowly emerging trends that might not even come to fruition (we're looking at you, DataOps), to version updates and product releases, for tech professionals the horizon always looms and shapes the present.

But it's not just about the big trends or releases that get coverage - it's also about planning your next (career) move, or even your next mini-project. That could be learning a new language (not necessarily new, but one you haven't yet got round to learning), trying a new paradigm, exploring a new library, or getting to grips with cloud native approaches to software development. This sort of learning is easy to overlook, but it's vital to any developer's development.

While the Packt library has a wealth of content for you to dig your proverbial claws into, if you're looking forward, Packt has some new titles available for pre-order that could help you plan your learning for the months to come. We've put together a list of some of our top picks among the pre-order titles available this month, due to be released in late February or March. Take a look, and take some time to consider your next learning journey...

Hands-On Deep Learning with PyTorch

TensorFlow might have set the pace when it comes to artificial intelligence, but PyTorch is giving it a run for its money. It's impossible to describe one as 'better' than the other - ultimately they both have valid use cases and can both help you do some pretty impressive things with data. Read next: Can a production ready Pytorch 1.0 give TensorFlow a tough time? The key difference is really in the level of abstraction and the learning curve - TensorFlow is more like a library, which gives you more control but also makes things a little more difficult. PyTorch, then, is a great place to start if you already know some Python and want to try your hand at deep learning. Or, if you have already worked with TensorFlow and simply want to explore new options, PyTorch is the obvious next step. Order Hands-On Deep Learning with PyTorch here.

Hands-On DevOps for Architects

Distributed systems have made the software architect role incredibly valuable. This person is not only responsible for deciding what should be developed and deployed, but also for the means through which it should be done and maintained. Distribution has also made the question of architecture relevant to just about everyone who builds and manages software. That's why Hands-On DevOps for Architects is such an important book for 2019. It isn't just for those who typically describe themselves as software architects - it's for anyone interested in infrastructure, in how things are put together, and in how they can be made more reliable, scalable and secure. With site reliability engineering finding increasing usage outside of Silicon Valley, this book could be an important piece in the next step of your career. Order Hands-On DevOps for Architects here.

Hands-On Full Stack Development with Go

Go has been cursed with a hell of a lot of hype. This is a shame - it means it's easy to dismiss as a fad or fashion that will quickly disappear. In truth, Go's popularity is only going to grow as more people experience its speed and flexibility. Indeed, in today's full-stack, cloud native world, Go is only going to go from strength to strength. In Hands-On Full Stack Development with Go you'll not only get to grips with the fundamentals of Go, you'll also learn how to build a complete full stack application on microservices, using tools such as Gin and ReactJS. Order Hands-On Full Stack Development with Go here.

C++ Fundamentals

C++ is a language that often gets a bad rap. You don't have to search the internet that deeply to find someone telling you there's no point learning C++ right now. And while it's true that C++ might not be as eye-catching as languages like, say, Go or Rust, it nevertheless still plays a very important role in the software engineering landscape. If you want to build performance-intensive desktop apps, C++ is likely going to be your go-to language. Read next: Will Rust replace C++?

One of the sticks often used to beat C++ is that it's a fairly complex language to learn. But rather than being a reason not to learn it, the challenge it presents to even relatively experienced developers is one well worth taking on. At a time when many aspects of software development seem to be getting easier, as new layers of abstraction remove problems we previously might have had to contend with, C++ bucks that trend, forcing you to take a very different approach. And although this approach might not be one many developers want to face, if you want to strengthen your skill set, C++ could certainly be a valuable language to learn.

The stats don't lie - C++ is placed 4th on the TIOBE index (as of February 2019), beating JavaScript, and commands a considerably high salary; indeed.com data from 2018 suggests that C++ was the second highest earning programming language in the U.S., after Python, with a salary of $115K. If you want to give C++ a serious go, then C++ Fundamentals could be a great place to begin. Order C++ Fundamentals here.

Data Wrangling with Python & Data Visualization with Python

Finally, we're grouping two books together - Data Wrangling with Python and Data Visualization with Python. This is because they both help you dig deep into Python's power and better understand how it has grown to become the definitive language of data. Of course, R might have something to say about this - but it's a fact that over the last 12-18 months Python has grown in popularity in a way that R has been unable to match. So, if you're new to any aspect of the data science and analysis pipeline, or you've used R and are now looking for a faster, more flexible alternative, both titles could offer you the insight and guidance you need. Order Data Wrangling with Python here. Order Data Visualization with Python here.


3 Natural Language Processing Models every ML Engineer should know

Amey Varangaonkar
18 Dec 2017
5 min read
Note: This excerpt is taken from the book Mastering Text Mining with R, co-authored by Ashish Kumar and Avinash Paul. The book gives an advanced view of different natural language processing techniques and their implementation in R.

The article below aims to introduce the concept of language models and their relevance to natural language processing.

In natural language processing, language models generate output strings that help assess the likelihood of a string of words being a sentence in a specific language. If we discard the sequence of words in all sentences of a text corpus and basically treat it like a bag of words, then the efficiency of different language models can be estimated by how accurately a model restores the order of strings in sentences. Which sentence is more likely: "I am learning text mining" or "I text mining learning am"? Which word is more likely to follow "I am..."?

Basically, a language model assigns the probability of a sentence being in a correct order. The probability is assigned over the sequence of terms by using conditional probability. Let us define a simple language modeling problem. Assume a bag of words contains words W1, W2, ..., Wn. A language model can be defined to compute any of the following:

- The probability of a sentence S1: P(S1) = P(W1, W2, W3, W4, W5)
- The probability of the next word in a sentence: P(W3 | W1, W2)

How do we compute these probabilities? We use the chain rule, decomposing the sentence probability into a product of smaller string probabilities:

P(W1 W2 W3 W4) = P(W1) P(W2|W1) P(W3|W1 W2) P(W4|W1 W2 W3)

N-gram models

N-grams are important for a wide range of applications, and they can be used to build simple language models. Consider a text T with W tokens, and let SW be a sliding window over the text.
If the sliding window consists of one cell, the collection of strings (w1)(w2)(w3)... is called a unigram; if the sliding window consists of two cells, the output (w1, w2)(w3, w4)(w5, w6)... is called a bigram. Using conditional probability, we can define the probability of a word given the previous word; this is known as the bigram probability. The conditional probability of an element wi, given the previous element wi-1, is:

P(wi | wi-1) = P(wi-1, wi) / P(wi-1)

Extending the sliding window, we can generalize the n-gram probability as the conditional probability of an element given the previous n-1 elements:

P(wi | wi-n+1, ..., wi-1)

The most common bigrams in any corpus are not very interesting, such as "on the", "can be", "in it", "it is". In order to get more meaningful bigrams, we can run the corpus through a part-of-speech (POS) tagger. This would filter the bigrams down to more content-related pairs such as "infrastructure development", "agricultural subsidies", "banking rates"; this is one way of filtering out less meaningful bigrams.

A better way to approach this problem is to take collocations into account; a collocation is the string created when two or more words co-occur in a language more frequently than chance would suggest. One way to measure this over a corpus is pointwise mutual information (PMI). The concept behind PMI is that for two words, A and B, we would like to know how much one word tells us about the other. For example, given an occurrence a of A and an occurrence b of B, how much does their joint probability differ from the expected value under the assumption that they are independent? This can be expressed as:

PMI(a, b) = log( P(a, b) / (P(a) P(b)) )

Unigram model: P(W1 W2 W3 W4) = P(W1) P(W2) P(W3) P(W4)

Bigram model: P(W1 W2 W3 W4) = P(W1) P(W2|W1) P(W3|W2) P(W4|W3)

In general, the chain rule gives P(w1 w2 ... wn) as a product of terms P(wi | w1 w2 ... wi-1). Applying the chain rule over long contexts can be difficult to estimate; the Markov assumption is applied to handle such situations.
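The bigram probability and PMI definitions above can be sketched with plain counts. The sketch below is in Python (although the book itself works in R), and the toy corpus and function names are ours; the estimates are simple maximum-likelihood counts with no smoothing:

```python
import math
from collections import Counter

# Toy sketch of bigram probabilities and PMI over a tiny corpus.

corpus = "i am learning text mining and i am enjoying text mining".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def p_bigram(w_prev, w):
    """P(w | w_prev) = count(w_prev, w) / count(w_prev)."""
    return bigrams[(w_prev, w)] / unigrams[w_prev]

def pmi(a, b):
    """log( P(a, b) / (P(a) P(b)) ), estimated from corpus counts."""
    p_ab = bigrams[(a, b)] / (n - 1)          # joint probability of the pair
    p_a, p_b = unigrams[a] / n, unigrams[b] / n
    return math.log(p_ab / (p_a * p_b))

print(p_bigram("i", "am"))        # 1.0: "am" always follows "i" in this corpus
print(pmi("text", "mining") > 0)  # True: the pair co-occurs more than chance
```

A positive PMI, as for "text mining" here, is exactly the collocation signal described above: the pair appears together more often than independence would predict.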
Markov Assumption

If we assume that the current string is independent of word strings further in the past, we can drop those strings to simplify the probability. Say the history consists of the three words Wi, Wi-1, Wi-2; instead of estimating the probability P(Wi+1 | Wi, Wi-1, Wi-2), we can directly apply P(Wi+1 | Wi, Wi-1).

Hidden Markov Models

Markov chains are used to study systems that are subject to random influences. They model systems that move from one state to another in steps governed by probabilities. The set of possible outcomes in a sequence of trials is called the states, and the probabilities of the states form the state distribution; the distribution in which the system starts is the initial state distribution. The probability of going from one state to another is called a transition probability. A Markov chain consists of a collection of states along with transition probabilities. The study of Markov chains is useful for understanding the long-term behavior of a system. In the chain's state diagram, each arc is associated with a probability value, and the arcs coming out of each node must form a probability distribution. In simple terms, there is a probability associated with every transition between states.

Hidden Markov models are non-deterministic Markov chains. They are an extension of Markov models in which the output symbol is not the same as the state.

Language models are widely used in machine translation, spelling correction, speech recognition, text summarization, question answering, and many more real-world use cases. If you would like to learn more about how to implement the above language models, check out the book Mastering Text Mining with R. This book will help you leverage language models using popular packages in R for effective text mining.
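A Markov chain like the one described can be sketched as a transition table plus a sampling step. The states and probabilities below are made up purely for illustration:

```python
import random

# Toy Markov chain sketch: each state's outgoing probabilities form a
# distribution, and the next state depends only on the current one.

transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

# Every row must be a probability distribution over the next states.
for state, row in transitions.items():
    assert abs(sum(row.values()) - 1.0) < 1e-9

def step(state, rng):
    """Sample the next state according to the transition probabilities."""
    next_states, weights = zip(*transitions[state].items())
    return rng.choices(next_states, weights=weights)[0]

rng = random.Random(42)
chain = ["sunny"]
for _ in range(10):
    chain.append(step(chain[-1], rng))
print(chain)
```

In a hidden Markov model, these states would no longer be observed directly; each state would instead emit an output symbol according to a second probability table.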


6 reasons to choose MySQL 8 for designing database solutions

Amey Varangaonkar
08 May 2018
4 min read
Whether you are a standalone developer or an enterprise consultant, you would obviously choose a database that provides good benefits and results when compared to other related products. MySQL 8 provides numerous advantages as the first choice in this competitive market. It has various powerful features available that make it a more comprehensive database. Today we will go through the benefits of using MySQL as the preferred database solution: [box type="note" align="" class="" width=""]The following excerpt is taken from the book MySQL 8 Administrator’s Guide, co-authored by Chintan Mehta, Ankit Bhavsar, Hetal Oza and Subhash Shah. This book presents step-by-step techniques on managing, monitoring and securing the MySQL database without any hassle.[/box] Security The first thing that comes to mind is securing data because nowadays data has become precious and can impact business continuity if legal obligations are not met; in fact, it can be so bad that it can close down your business in no time. MySQL is the most secure and reliable database management system used by many well-known enterprises such as Facebook, Twitter, and Wikipedia. It really provides a good security layer that protects sensitive information from intruders. MySQL gives access control management so that granting and revoking required access from the user is easy. Roles can also be defined with a list of permissions that can be granted or revoked for the user. All user passwords are stored in an encrypted format using plugin-specific algorithms. Scalability Day by day, the mountain of data is growing because of extensive use of technology in numerous ways. Because of this, load average is going through the roof. In some cases, it is unpredictable that data cannot exceed up to some limit or number of users will not go out of bounds. Scalable databases would be a preferable solution so that, at any point, we can meet unexpected demands to scale. 
MySQL is a rewarding database system for its scalability, which can scale horizontally and vertically; in terms of data, spreading database and load of application queries across multiple MySQL servers is quite feasible. It is pretty easy to add horsepower to the MySQL cluster to handle the load. An open source relational database management system MySQL is an open source database management system that makes debugging, upgrading, and enhancing the functionality fast and easy. You can view the source and make the changes accordingly and use it in your own way. You can also distribute an extended version of MySQL, but you will need to have a license for this. High performance MySQL gives high-speed transaction processing with optimal speed. It can cache the results, which boosts read performance. Replication and clustering make the  system scalable for more concurrency and manages the heavy workload. Database indexes also accelerate the performance of SELECT query statements for substantial amount of data. To enhance performance, MySQL 8 has included indexes in performance schema to speed up data retrieval. High availability Today, in the world of competitive marketing, an organization's key point is to have their system up and running. Any failure or downtime directly impacts business and revenue; hence, high availability is a factor that cannot be overlooked. MySQL is quite reliable and has constant availability using cluster and replication configurations. Cluster servers instantly handle failures and manage the failover part to keep your system available almost all the time. If one  server gets down, it will redirect the user's request to another node and perform the requested operation. Cross-platform capabilities MySQL provides cross-platform flexibility that can run on various platforms such as Windows, Linux, Solaris, OS2, and so on. 
It has great API support for all major languages, which makes it very easy to integrate with languages such as PHP, C++, Perl, Python, Java, and so on. It is also part of the Linux, Apache, MySQL, PHP (LAMP) stack that is used worldwide for web applications. That's it then! We discussed a few important reasons why MySQL is among the most popular relational databases in the world, widely adopted across many enterprises. If you want to learn more about MySQL's administrative features, make sure to check out the book MySQL 8 Administrator's Guide today!

12 most common MySQL errors you should be aware of
Top 10 MySQL 8 performance benchmarking aspects to know
Antonio Cucciniello
02 Oct 2017
4 min read

What is coding as a service?

What is coding as a service? If you want to know what coding as a service is, you have to start with artificial intelligence. Put simply, coding as a service is using AI to build websites, using your machine to write code so you don't have to.

The challenges facing engineers and programmers today
In order to give you a solid understanding of what coding as a service is, you must understand where we are today. Typically, we have programs that are made by software developers or engineers. These programs are usually created to automate a task or make tasks easier. Think of things that speed up processing or automate a repetitive task. This is, and has been, extremely beneficial. The productivity gained from automated applications and tasks allows us, as humans and workers, to spend more time on creating important things and coming up with more groundbreaking ideas. This is where artificial intelligence and machine learning come into the picture.

Artificial intelligence and coding as a service
Recently, with the gains in computing power that have come with time and breakthroughs, computers have become more and more powerful, allowing AI applications to arise in more common practice. Today, there are applications that allow users to detect objects in images and videos in real time, translate speech to text, and even determine the emotions in text sent by someone else. For an example of an artificial intelligence application in use today, you may have used an Amazon Alexa or Echo device. You talk to it, it understands your speech, and it then completes a task based on what you said. Previously, this was a task only humans could perform (the ability to understand speech). Now, with these advances, Alexa is capable of understanding everything you say, given that it is "trained" to understand it. This capability, previously only expected of humans, is now being filtered through to technology.
How coding as a service will automate boring tasks
Today, we have programmers who write applications for many uses and build things such as websites for businesses. As things become more and more automated, programmers' efficiency will increase and the need for additional manpower will shrink. Coding as a service, otherwise known as CaaS, will result in even fewer programmers being needed. It mixes the efficiencies we already have with artificial intelligence to do programming tasks for a user. Using natural language processing to understand exactly what the user or customer is saying and means, it will be able to make edits to websites and applications on the fly. Not only will it be able to make edits, but combined with machine learning, CaaS can also come up with recommendations from past data and make edits on its own. Efficiency-wise, it is cheaper to own a computer than it is to pay a human, especially when a computer will work around the clock for you and never get tired. Imagine paying an extremely low price (lower than you might already pay to get a website made) to get your website built or your small application created.

Conclusion
Every new technology comes with pros and cons. Overall, the number of software developers may decrease; or, as a developer, this may free up your time from more menial tasks and enable you to further specialize and broaden your horizons. Artificial intelligence programs such as coding as a service could do plenty of the underlying work, leaving the heavier lifting to human programmers. You just need to use the positives to your advantage!
Melisha Dsouza
30 Jan 2019
9 min read

‘Computing technology at a tipping point’, says WEF Davos Panel

The ongoing World Economic Forum meeting 2019 has seen a vast array of discussions on political, technological and other industrial agendas. The meeting brings together the world's foremost CEOs, government officials, policy-makers, experts and academics, international organizations, youth, technology innovators and representatives of civil society, with the aim of driving positive change in the world on multiple facets. This article focuses on the talk 'Computing Technology at a Tipping Point', moderated by Nicholas Carlson from Business Insider, with a panel consisting of Antonio Neri, president and Chief Executive Officer of Hewlett Packard Enterprise; Jeremy O'Brien, CEO of PsiQuantum; and Amy Webb, Adjunct Assistant Professor at NYU Stern School of Business. Their discussion explored questions of today's age: why this is an important time for technology, the role of governments in encouraging a technological revolution, the role of community and business in optimizing tech, and the challenges faced as we set out to utilize next-generation computing technologies like quantum computing and AI.

Quantum Computing - The necessity of the future
The discussion kicked off with the importance of quantum computing in the present as well as the future. O'Brien defined quantum computing as "nothing short of a necessary tool that humans need to build their future". According to him, QC is a "genuinely exponentially powerful technology", due to the varied applications it can impact if put to use in the correct way, from human health and energy to molecular chemistry, among others. Webb calls 2019 the year of divergence, where we will move from the classic von Neumann architecture to a more diversified quantum age. Neri believes we are now at the end of Moore's law, which states that the overall processing power of computers doubles every two years.
He says that two years from now we will generate twice the amount of data generated today, and there will be a major divergence between the data generated and the computation power available. This is why we need to focus on solving the architectural problems of processing algorithms and computing data, rather than focusing on the amount of data.

Why is this an exciting time for tech?
O'Brien: Quantum computing and molecular simulation for techno-optimism
O'Brien expresses his excitement about the quantum computing and molecular simulation fields, where developers are just touching the waters with both concepts. He has been in the QC field for the past 20 years and says that he has faith in quantum computing; even though it's the next big thing to watch out for, he assures developers that it will not replace conventional computing. Quantum computers can in fact be used to improve the performance of classical computing systems, to handle the huge amounts of data and information we are faced with today. In addition to QC, another concept he believes 'will transform lives' is molecular simulation. Molecular simulation will design new pharmaceuticals and new chemicals, and help build really sophisticated computers to solve exponentially large problems.

Webb: The beginning of the end of smartphones
"We are in the midst of a great transformation. This is an explosion happening in slow motion." Based on data-driven models, she says this is the beginning of the end of smartphones. Ten years from now, as our phones retrieve biometric information and information derived from what we wear and use, computing environments will look different. Citing the example of Magic Leap, which creates spatial glasses, she mentions how the computable devices we wear will turn our environment into a computable space where data is visualized in a whole different way. She advises businesses to rethink how they function, even as the current cloud versus edge computing architectures change.
Companies should start thinking in terms of ten years rather than the short term, since decisions made today will have long-term consequences. While this is the positive side, Webb is pessimistic that there is no global alignment on the use of data; systems have to be trained on the basis of GDPR and other data laws.

Neri: Continuous re-skilling to stay relevant
Humans should continuously re-skill themselves as times and technologies change, to avoid exclusion from new jobs as and when they arrive. He further states that, in the field of artificial intelligence, there should not be a concentration of power in a few entities like Baidu, Alibaba, Tencent, Google, Microsoft, Facebook, and Apple. While these companies are at the forefront in deciding the future of AI, innovation should happen at all levels. We need guidelines and policy, not to regulate but to guide the revolution. Business, community and government should start thinking about ethical and moral codes.

Government's role in technological optimism
The speakers emphasized the importance of governments' involvement in these 'exciting times' and how they can work towards making citizens feel safe against the possible abuse of technology.

Webb: Regulation of AI doesn't make sense
We need to have conversations on optimizing artificial intelligence using available data. She expresses her opinion that the regulation of AI doesn't make sense, because we would shift from a group of people who understand and implement optimization to lawmakers who do not have the technical know-how. Nowadays, people focus on regulating tech instead of optimizing it, because most don't understand the nitty-gritty of a system, nor do they understand a system's limitations. Governments play a huge role in this optimization-or-regulation decision making.
She emphasizes the need to get hold of the right people to come to an agreement "where companies are a hero to their shareholders and the government to their citizens". Governments should start talking about and exploring quantum computing so that its benefits are distributed equitably in the shortest amount of time.

Neri: Human-centered future of computing
He adds that for a human-centered future of computing, it is we who need to decide what is good or bad for us. He agrees with Webb's point that since technology evolves in ways we cannot predict, we need to come to reasonable conclusions before a crisis arrives. Further, he adds that governments should apply ethical principles while adopting and implementing technology and innovation.

Role of politicians in technology
During the discussion, a member of the European Parliament noted the common notion that politicians do not understand technology and cannot keep up with changing times. Stating that many companies do not think about governance, human rights, democracy and the possible abuse of their products, the questioner said that we need a minimum threshold to protect human rights and safeguard humans against abuse. Her question centered around ways to invite politicians to understand tech better before it's too late. Expressing her gratitude that the European Parliament is asking such a thoughtful question, Webb suggested that creating a framework that key people on all sides of the spectrum can agree to, along with a mechanism that incentivises everyone to play fairly, will help parliaments and other law-making bodies feel included in understanding technology. Neri also suggested a guiding principle: think ethically before using any technology, without stopping innovation.

Technological progress in China and its implications for the U.S.
Another question that caught our attention was about the progress of technology in China and its implications for the US.
Webb says that the development of tools, technologies, frameworks and data-gathering mechanisms to mine, refine and monetize data follows different approaches in the US and China. In China, activities related to AI and the activities of Baidu, Alibaba and Tencent are under the leadership of the Chinese Communist Party. She says that it is hard to overlook what is happening in China with the BRI (Belt and Road Initiative), 5G, digital transformation, expansion in fibre and expansion in e-commerce, and that a new world order is being formed because of it. She is worried that the US and its allies will be locked out economically from the BRI countries, and AI will be one of the factors propelling this.

Role of the military in technology
The last question pointed out that some of the worst abuses of technology can be carried out by governments, and that the military has the potential to misuse technology. We need to have conversations on the ethical use of technology and how to design technology to fit ethical norms. Neri says that corporations do have a point of view on the military using technology, and governments are consulting them on the impacts of technology on the world as well. This is a hard topic, and the debate is ongoing even though it is not visible to the public. Webb says that the US has always had ties between industry and government, and that we live in a world of social media where conversations spiral out of control because of it. She advises companies to meet quarterly to have conversations along these lines and to understand how their work with the military and government aligns with the core values of their company.

Sustainability and technology
Neri states that 6% of global power is used to run data centers, and it is important to determine how to address this problem. The solutions proposed are: Innovate in different ways. Be mindful of the entire supply chain, from the time you procure minerals to build a system until you recycle it.
We need to think of a circular economy: consider whether systems can be re-used by other companies, and check which parts can be recycled and reused. We could use synthetic DNA to back up data, which could potentially use less energy. To sustain human life on this planet, we need to optimise how we use resources, physical and virtual. Quantum computing tools will help invent the future; new materials can be designed using QC. You can listen to the entire talk at the World Economic Forum's official page.

What the US-China tech and AI arms race means for the world – Frederick Kempe at Davos 2019
Microsoft’s Bing ‘back to normal’ in China
Facebook’s outgoing Head of communications and policy takes the blame for hiring PR firm ‘Definers’ and reveals more
Savia Lobo
15 Jun 2019
4 min read

Maria Ressa on Astroturfs that turns make-believe lies into facts

The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, features Maria Ressa, CEO and Executive Editor of Rappler, who talks about how information is powerful, and how make-believe lies, repeated often enough, can be turned into facts. In her previous presentation, Maria gave a glimpse of this argument: "Information is power and if you can make people believe lies, then you can control them. Information can be used for commercial benefits as well as a means to gain geopolitical power." She resumes by saying that the Philippines is a cautionary tale: an example of how quickly democracy crumbles and is eroded from within, and of how these information operations can take over the entire ecosystem and transform lies into facts. If you can make people believe lies are facts, then you can control them. "Without facts, you don't have the truth; without truth, you don't have trust," she says. Journalists have long been the gatekeepers of facts. When we come under attack, democracy is under attack, and when this happens, the voice with the loudest megaphone wins. She says that the Philippines is a petri dish for social media: as of January 2019, Hootsuite reported that Filipinos spend the most time online and the most time on social media, globally. Facebook is our internet. However, it is also about introducing a virus into our information ecosystem. Over time, that virus of lies masquerading as fact takes over the body politic, and you need to develop a vaccine. That's what we're in search of, and she says she does see a solution.
“If social networks are your family and friends in the physical world, social media is your family and friends on steroids; no boundaries of time and space.” She showed that astroturfing is typically a three-pronged attack, and demonstrated examples of how she herself has been the target of an astroturf attack. In the long term, the answer is education; the other three witnesses before her covered some of what can be done in the medium term, i.e. media literacy. However, in the short term, only the social media platforms can do something immediately: "we're on the front lines, we need immediate help and an immediate solution". She said her company, Rappler, is one of three fact-checking partners of Facebook in the Philippines, and they take that responsibility really seriously. She further says, "We don't look at the content alone. Once we check to make sure that it is a lie, we look at the network that spreads the lies." The first step is to stop new viruses from entering the ecosystem. It is whack-a-mole if one only looks at the content; but when you begin to look at the networks that spread it, you have something that you can pull out. "It's very difficult to go through 90 hate messages per hour sustained over days and months," she said. That is the kind of astroturfing that turns lies into truth; for them, this is a matter of survival. To hear the questions asked by other representatives, you can listen to the full hearing video titled "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics" on ParlVU.

‘Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse’ say experts to House Oversight and Reform Committee
Zuckberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed?
Facebook bans six toxic extremist accounts and a conspiracy theory organization
Clare Bowman
30 Jun 2014
5 min read

Soldering: Tips and Tricks for Makers

Although solderless breadboards provide makers with an easy way to build functioning circuits and software, the builds are only really reliable if they aren't handled too heavily. For example, in our first post, we talked about building a Weather Cube as a sensory tool for occupational therapists. The breadboard circuit and the foam cube secured inside it might survive fairly well, but for any highly physical wearable applications, it would be easy for a single wire to be pulled out of the circuit, causing it to fail at a vital moment. In this post, we will detail how we soldered our Weather Cube project, plus provide you with timesaving and pain-saving tips born of trial and error (and one burnt finger). If you have very little or no experience working with stripboards, it could be worth practicing your skills before starting.

Important safety warning
Protective equipment such as safety glasses should always be worn. You should also have first aid equipment available whenever working with metal, including melting solder, hacksawing, and spot-cutting copper board. Before you begin soldering your project, you will need the following: a soldering iron (the iron becomes extremely hot, so take care not to touch the tip with your hands) and solder (usually made of tin and lead).

Soldering a stripboard for a Weather Cube
First, cut your stripboard (also called veroboard by some people; it's the same thing). Do this by laying the stripboard horizontally, with the copper side facing you. Count 25 points from the middle right-hand side of the stripboard, and draw a line from top to bottom. Use a G-clamp to secure your stripboard to a solid surface, and then cut along the line with your junior hacksaw. Starting with just downward strokes will help you keep on track initially. You could also cut the top two rails off if you want your project to be as small as possible, or color the top two rails to remind yourself not to count these holes.
Then, follow these steps: Count six spaces from the right side. Draw a line from the top to the bottom of the board on the copper side. Count seven spaces from the line you've just drawn, and draw a line from the top to the bottom again. Count a further six spaces and once again draw a line from the top to the bottom. Spot-cut these lines. Spot cutting involves twisting a dedicated spot cutter into parts of the copper where you want a gap in the copper rails. Then, flip the stripboard over so that the copper side is facing down, and clip it onto the soldering station holder. For convenience, we recommend using exactly the same component positions as the breadboard build. It's useful to keep a tested breadboard version of the layout nearby; you can use this as a reference for component positions on the stripboard version as you build it, to help ensure you don't introduce errors.

Soldering a piezo
A piezo is a small sensor device used by makers to convert pressure and force into an electrical charge. These sensors are also very delicate and can easily come apart; if one does, you will have to re-solder it. To solder the piezo back together, follow these steps: Strip the end of the wire by approximately 4 mm. Twist the wire strands to make one piece of wire. Tin the wire by coating a bit of solder onto the exposed strands. Then, either push the wire into a hole on the same railing, or, if the wire has come detached at the piezo end, solder it back onto the piezo. Don't leave the soldering iron on the piezo element for too long, as you could damage it.

Conclusion
Soldering can give projects greater robustness, allowing them to be handled without easily falling apart. With these steps, we hope to have provided you with some of the tips and tricks needed to successfully solder your inventions.

About the authors
Clare Bowman enjoys hacking playful interactive installations and co-designing digitally fabricated consumer products.
She has exhibited projects at Maker Faire UK, Victoria and Albert Museum, FutureEverything, and Curiosity Collective gallery shows. Some recent work includes “Sands Everything”, an interactive hourglass installation interpreting Shakespeare’s Seven Ages of Man soliloquy through gravity-controlled animated grains, and more. Cefn Hoile sculpts open source hardware and software, and supports others doing the same. Drawing on 10 years of experience in R&D for a multinational technology company, he works as a public domain inventor, and an innovation catalyst and architect of bespoke digital installations and prototypes. He is a founder-member of the CuriosityCollective.org digital arts group, and a regular contributor to open source projects and not-for-profits. Cefn is currently completing a PhD in Digital Innovation at Highwire, University of Lancaster, UK.
Aarthi Kumaraswamy
18 Nov 2017
2 min read

Handpicked for your Weekend Reading - 17th Nov '17

The weekend is here! You have laundry to do, Netflix episodes of your favorite show to binge on, elusive sleep to catch up on, and friends to go out with; and if you are married, spending quality time with your family is also on your priority list. The last thing you want to do is spend hours shortlisting content that is worth your reading time. So here is a handpicked list of the best of Datahub published this week. Enjoy!

3 Things you should know that happened this week in News
Introducing Tile: A new machine learning language with auto-generating GPU Kernels
What we are learning from Microsoft Connect(); 2017
Tensorflow Lite developer preview is Here

Get hands-on with these Tutorials
Implementing Object detection with Go using TensorFlow
Machine Learning Algorithms: Implementing Naive Bayes with Spark MLlib
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data

Do you agree with these Insights & Opinions?
3 ways JupyterLab will revolutionize Interactive Computing
Of Perfect Strikes, Tackles and Touchdowns: How Analytics is Changing Sports
13 reasons why Exit Polls get it wrong sometimes

Just relax and have fun reading these
Date with Data Science Episode 04: Dr. Brandon explains ‘Transfer Learning’ to Jon
Implementing K-Means Clustering in Python Scotland Yard style!
Mary Gualtieri
27 Jun 2016
5 min read

Angular 2 Dependency Injection: A powerful design pattern

Dependency injection is one of the biggest features of Angular. It allows you to inject dependencies into different components throughout your web application without needing to know how those dependencies are created. So what does this actually mean? If a component depends on a service, you do not create this service yourself. Instead, you have a constructor request this service, and the framework then provides it to you. You can view dependency injection as a design pattern or framework. In Angular 1, you must tell the framework how to create a service. Let's take a look at a code example. There is nothing out of the ordinary with this sample code: the class is set up to construct a house object when needed. However, the problem is that the constructor assigns the needed dependencies, and it knows how these objects are created. Why is that a big deal? First, it makes the code very hard to maintain, and second, the code is even harder to test. Let's rewrite the code example. What just happened? The dependency creation is moved out of the constructor, and the constructor is extended to expect all of the needed dependencies. This is significant because when you want to create a new house object, all you have to do is pass all of the needed dependencies to the constructor. This decouples the dependencies from your class, allowing you to pass mocked dependencies when you write a test. Angular 2 has made a drastic, but welcome, change to dependency injection. Angular 2 provides more control for maintainability, and it is easier to test. The new version of Angular focuses more on how to get these dependencies.
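The code listings this article refers to did not survive the page scrape. As a stand-in, here is a minimal TypeScript sketch of the contrast being described, using the House/Couch/Table/Doors names the article introduces later; the exact original listings are not recoverable, so this is an illustration, not the author's code:

```typescript
class Couch {}
class Table {}
class Doors {}

// Before: the constructor creates its own dependencies.
// The class knows how its collaborators are built, so swapping
// in mocks for a test is impossible without editing the class.
class HardWiredHouse {
  couch: Couch = new Couch();
  table: Table = new Table();
  doors: Doors = new Doors();
}

// After: the constructor *expects* its dependencies.
// Whoever creates a House decides how they are built, so tests
// can simply pass in mocked objects.
class House {
  constructor(
    public couch: Couch,
    public table: Table,
    public doors: Doors
  ) {}
}

const house = new House(new Couch(), new Table(), new Doors());
```

Nothing Angular-specific is happening yet; the framework's contribution, covered next, is automating the "whoever creates a House" part.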
Dependency injection consists of three things: an injector, a provider, and a dependency. The injector object exposes the APIs for you to create instances of dependencies. A provider tells the injector how to create the instance of a dependency; it does this by taking a token and mapping it to a factory function that creates an object. A dependency is the type for which an object should be created. What does this look like in code? You import an injector from Angular 2, which exposes some static APIs for creating injectors. The resolveAndCreate() function is a factory function that creates an injector and takes a list of providers. However, you might ask yourself, "How does the injector know which dependencies are needed in order to represent a house?" Easy! You import Inject from the framework and apply the decorator to all of the parameters in the constructor. By attaching the Inject decorator to the House class, the metadata is made available to the dependency injection system. To put it simply, you tell the dependency injection system that the first constructor parameter should be of the Couch type, the second of the Table type, and the third of the Doors type. The class declares its dependencies, and the dependency injection system can read this information whenever the application needs to create an object of House. If you take a look at the resolveAndCreate() method, it creates an injector from an array of bindings; the passed-in bindings, in this case, are the types from the constructor parameters. You might be wondering how dependency injection in Angular 2 works within the framework. Luckily, you do not have to create injectors manually when you build Angular 2 components: the developers behind Angular 2 have created an API that hides the injector system when you build components. Let's explore how this actually works.
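Angular 2's real injector lives inside the framework, but the injector/provider/dependency triad just described can be sketched framework-free. The ToyInjector below merely mimics the shape of Angular 2's API (the names resolveAndCreate and createChild are borrowed for familiarity); it is an illustration of the pattern, not the actual implementation:

```typescript
// Tokens are classes; a provider maps a token to a factory that
// may ask the injector for further dependencies.
type Token<T> = new (...args: any[]) => T;
type Factory<T> = (injector: ToyInjector) => T;

class ToyInjector {
  private providers = new Map<Token<unknown>, Factory<unknown>>();

  constructor(private parent?: ToyInjector) {}

  // Loosely modeled on Angular 2's Injector.resolveAndCreate().
  static resolveAndCreate(
    providers: Array<[Token<unknown>, Factory<unknown>]>
  ): ToyInjector {
    const injector = new ToyInjector();
    for (const [token, factory] of providers) {
      injector.providers.set(token, factory);
    }
    return injector;
  }

  get<T>(token: Token<T>): T {
    const factory = this.providers.get(token);
    if (factory) return factory(this) as T;
    if (this.parent) return this.parent.get(token);
    throw new Error(`No provider for ${token.name}`);
  }

  // A child injector can shadow the parent's bindings; this is
  // what a component-level `providers` property amounts to.
  createChild(
    providers: Array<[Token<unknown>, Factory<unknown>]>
  ): ToyInjector {
    const child = new ToyInjector(this);
    for (const [token, factory] of providers) {
      child.providers.set(token, factory);
    }
    return child;
  }
}

class Couch {}
class Table {}
class Doors {}
class House {
  constructor(
    public couch: Couch,
    public table: Table,
    public doors: Doors
  ) {}
}

// The providers tell the injector how to build each dependency;
// House's provider asks the injector for House's own dependencies.
const injector = ToyInjector.resolveAndCreate([
  [Couch, () => new Couch()],
  [Table, () => new Table()],
  [Doors, () => new Doors()],
  [House, inj => new House(inj.get(Couch), inj.get(Table), inj.get(Doors))],
]);

const house = injector.get(House);

// Shadowing a binding in a child injector, as a component with its
// own `providers` property would:
class MockCouch extends Couch {}
const childInjector = injector.createChild([[Couch, () => new MockCouch()]]);
// childInjector.get(Couch) yields a MockCouch;
// childInjector.get(Table) falls through to the parent injector.
```

The point of the sketch is the separation of roles: the class only declares what it needs, the providers say how each thing is built, and the injector wires them together on demand.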
Here, we have a very basic component, but what happens if you expand it? As you add a class, you need to make it available in your application as an injectable. Do this by passing provider configurations to your application injector. bootstrap() actually takes care of creating the root injector for you; it takes a list of providers as a second argument and passes it straight to the injector when it is created. One last thing to consider when using dependency injection: what do you do if you want a different binding configuration in a specific component? You simply add a providers property to the component. Remember that providers do not construct the instances that will be injected; instead, a child injector is created for the component. To conclude, Angular 2 introduces a new dependency injection system. The new dependency injection system allows for more control to maintain your code, to test it more easily, and to rely on interfaces. It is built into Angular 2 and has a single API for dependency injection into components.

About the author
Mary Gualtieri is a full stack web developer and web designer who enjoys all aspects of the Web and creating a pleasant user experience. Web development, specifically frontend development, is an interest of hers because it challenges her to think outside of the box and solve problems, all while constantly learning. She can be found on GitHub as MaryGualtieri.
Pavan Ramchandani
01 Jun 2018
13 min read
Save for later

What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies

Pavan Ramchandani
01 Jun 2018
13 min read
Two decades ago, the IT industry saw tremendous opportunities with the dot-com boom. Similar to that period, the IT industry is transitioning through another period of innovation. The disruption is seen in major lines of business with the introduction of recent technology trends like cloud services, the Internet of Things (IoT), single-page applications, and social media. In this article, we cover the role and implications of RESTful web APIs in these emerging technologies. This article is an excerpt from a book written by Balachandar Bogunuva Mohanram, titled RESTful Java Web Services, Second Edition.

Cloud services

We are in an era where business and IT are married together. For enterprises, thinking of a business model without an IT strategy is out of the question. While keeping the focus on the core business, the challenge that often lies ahead of the executive team is optimizing the IT budget. Cloud computing has come to the rescue of the executive team by bringing savings to the IT spending incurred in running a business. Cloud computing is an IT model for enabling anytime, anywhere, convenient, on-demand network access to a shared pool of configurable computing resources. In simple terms, cloud computing refers to the delivery of hosted services over the internet that can be quickly provisioned and decommissioned with minimal management effort and little intervention from the service provider. 
Cloud characteristics

Five key characteristics deemed essential for cloud computing are as follows:

- On-demand self-service: The ability to automatically provision cloud-based IT resources as and when required by the cloud service consumer
- Broad network access: The ability to support seamless network access for cloud-based IT resources via different network elements such as devices, network protocols, security layers, and so on
- Resource pooling: The ability to share IT resources for cloud service consumers using the multi-tenant model
- Rapid elasticity: The ability to dynamically scale IT resources at runtime and also release IT resources based on demand
- Measured service: The ability to meter service usage to ensure cloud service consumers are charged only for the services utilized

Cloud offering models

Cloud offerings can be broadly grouped into three major categories (IaaS, PaaS, and SaaS) based on their usage in the technology stack:

- Software as a Service (SaaS) delivers the applications required by an enterprise, saving the costs an enterprise needs to procure, install, and maintain these applications, which will now be offered by a cloud service provider at competitive pricing
- Platform as a Service (PaaS) delivers the platforms required by an enterprise for building their applications, saving the costs the enterprise needs to set up and maintain these platforms, which will now be offered by a cloud service provider at competitive pricing
- Infrastructure as a Service (IaaS) delivers the infrastructure required by an enterprise for running their platforms or applications, saving the costs the enterprise needs to set up and maintain the infrastructure components, which will now be offered by a cloud service provider at competitive pricing

RESTful APIs' role in cloud services

RESTful APIs can be looked on as the glue that connects cloud service providers and cloud service consumers. 
For example, application developers who need to display a weather forecast can consume the Google Weather API. In this section, we will look at the applicability of RESTful APIs for provisioning resources in the cloud. For the illustrations, we will be using the Oracle Cloud service platform. Users can set up a free trial account via https://cloud.oracle.com/home and try out the examples discussed in the following sections. As an example, we will try to set up a test virtual machine instance using the REST APIs. The high-level steps to be performed are as follows:

1. Locate the REST API endpoint
2. Generate an authentication cookie
3. Provision the virtual machine instance

Locating the REST API endpoint

Once users have signed up for an Oracle Cloud account, they can locate the REST API endpoint by navigating via the following steps:

1. Login screen: Choose the relevant Cloud Account details and click the My Services button as shown in the screenshot ahead:
2. Home page: Displays the cloud services Dashboard for the user. Click the Dashboard icon as shown in the following screenshot:
3. Dashboard screen: Lists the various cloud offerings. Click the Compute Classic offering:
4. Compute Classic screen: Displays the details of infrastructure resources utilized by the user:
5. Site Selector screen: Displays the REST endpoint:

Generating an authentication cookie

Authentication is required for provisioning the IT resources. For this purpose, we will generate an authentication cookie using the Authenticate User REST API. 
The details of the API are as follows:

- API function: Authenticate the supplied user credentials and generate an authentication cookie for use in the subsequent API calls
- Endpoint: <REST endpoint captured in the previous section>/authenticate/ (for example, https://compute.eucom-north-1.oraclecloud.com/authenticate/)
- HTTP method: POST
- Request header properties: Content-Type: application/oracle-compute-v3+json and Accept: application/oracle-compute-v3+json
- Request body: user (the two-part name of the user in the format /Compute-identity_domain/user) and password (the password for the specified user). Sample request: { "password": "xxxxx", "user": "/Compute-586113456/test@gmail.com" }
- Response header properties: set-cookie (the authentication cookie value)

The following screenshot shows the authentication cookie generated by invoking the Authenticate User REST API via the Postman tool:

Provisioning a virtual machine instance

Consumers are allowed to provision IT resources on the Oracle Compute Cloud infrastructure service using the LaunchPlans or Orchestration REST API. For this demonstration, we will use the LaunchPlans REST API. The details of the API are as follows:

- API function: Launch plan used to provision infrastructure resources in Oracle Compute Cloud Service
- Endpoint: <REST endpoint captured in the previous section>/launchplan/ (for example, https://compute.eucom-north-1.oraclecloud.com/launchplan/)
- HTTP method: POST
- Request header properties: Content-Type: application/oracle-compute-v3+json, Accept: application/oracle-compute-v3+json, and Cookie: <authentication cookie>
- Request body: instances (an array of instances to be provisioned; for details of the properties required by each instance, refer to http://docs.oracle.com/en/cloud/iaas/compute-iaas-cloud/stcsa/op-launchplan--post.html) and relationships (mention if there is any relationship with other instances). 
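As a rough client-side sketch (not from the book), here is how the headers and bodies of these two calls might be assembled. The endpoint, identity domain, credentials, and cookie value are all placeholders, and no HTTP request is actually sent:

```typescript
// Sketch: assembling the Authenticate User and LaunchPlans requests
// described above. All concrete values are placeholders.
const restEndpoint = "https://compute.eucom-north-1.oraclecloud.com";

// Both APIs use the same Oracle Compute media type.
function buildHeaders(authCookie?: string): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/oracle-compute-v3+json",
    "Accept": "application/oracle-compute-v3+json",
  };
  // Only the provisioning call carries the cookie from /authenticate/.
  if (authCookie) headers["Cookie"] = authCookie;
  return headers;
}

// Step 1: POST <endpoint>/authenticate/ with user and password.
const authRequest = {
  url: `${restEndpoint}/authenticate/`,
  headers: buildHeaders(),
  body: { user: "/Compute-586113456/test@gmail.com", password: "xxxxx" },
};

// Step 2: POST <endpoint>/launchplan/ with the instances to provision,
// reusing the set-cookie value returned by step 1.
const launchPlanRequest = {
  url: `${restEndpoint}/launchplan/`,
  headers: buildHeaders("<set-cookie value from step 1>"),
  body: {
    instances: [{
      shape: "oc3",
      imagelist: "/oracle/public/oel_6.4_2GB_v1",
      name: "/Compute-586113742/test@gmail.com/test-vm-1",
      label: "test-vm-1",
      sshkeys: [],
    }],
  },
};

console.log(launchPlanRequest.url);
```

An actual client would POST authRequest, read the set-cookie response header, and substitute that value into buildHeaders() for the launch plan call.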
Sample request: { "instances": [ { "shape": "oc3", "imagelist": "/oracle/public/oel_6.4_2GB_v1", "name": "/Compute-586113742/test@gmail.com/test-vm-1", "label": "test-vm-1", "sshkeys":[] } ] }

Response body: the provisioned list of instances and their relationships.

The following screenshot shows the creation of a test virtual machine instance by invoking the LaunchPlans REST API via the Postman tool: HTTP response status 201 confirms that the provisioning request was successful. Check the provisioned instance status via the cloud service instances page as shown here:

Internet of things

The Internet of Things (IoT), as the name says, can be considered a technology enabler for things (which includes people as well) to connect to or disconnect from the internet. The term IoT was first coined by Kevin Ashton in 1999. With broadband Wi-Fi becoming widely available, it is becoming a lot easier to connect things to the internet. This has a lot of potential to enable a smart way of living, and there are already many projects under way around smart homes, smart cities, and so on. A simple use case is predicting the arrival time of a bus so that commuters can plan accordingly if there are any delays. In many developing countries, the transport system is enabled with smart devices that help commuters predict the arrival or departure time of a bus or train precisely. The analyst firm Gartner has predicted that more than 26 billion devices will be connected to the internet by 2020. The following diagram from Wikipedia shows the technology roadmap depicting the applicability of the IoT by 2020 across different areas:

IoT platform

The IoT platform consists of four functional layers: the device, data, integration, and service layers. For each functional layer, let us understand the capabilities required of the IoT platform: Device: Device management capabilities supporting device registration, provisioning, and controlling access to devices. 
Seamless connectivity to devices to send and receive data. Data: Management of the huge volume of data transmitted between devices; deriving intelligence from the data collected and triggering actions. Integration: Collaboration of information between devices. Service: API gateways exposing the APIs.

IoT benefits

The IoT platform is seen as the latest evolution of the internet, offering various benefits as shown here: The IoT is becoming widely used due to the lowering cost of technologies such as cheap sensors, cheap hardware, and the low cost of high-bandwidth networks. The connected human is the most visible outcome of the IoT revolution. People are connected to the IoT through various means such as Wearables, Hearables, Nearables, and so on, which can be used to improve the lifestyle, health, and wellbeing of human beings:

- Wearables: Any form of sophisticated, computer-like technology which can be worn or carried by a person, such as smart watches, fitness devices, and so on
- Hearables: Wireless computing earpieces, such as headphones
- Nearables: Smart objects with computing devices attached to them, such as door locks, car locks, and so on. Unlike Wearables or Hearables, Nearables are static

Also, in the healthcare industry, IoT-enabled devices can be used to monitor patients' heart rate or diabetes. Smart pills and nanobots could eventually replace surgery and reduce the risk of complications.

RESTful APIs' role in the IoT

The architectural pattern used for the realization of the majority of IoT use cases follows the event-driven architecture pattern. The event-driven architecture software pattern deals with the creation, consumption, and identification of events. An event can be generalized as a change in the state of an entity. For example, a printer device connected to the internet may emit an event when the printer cartridge is low on ink, so that the user can order a new cartridge. 
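The low-ink printer example can be sketched as a tiny event-driven loop. The EventBus and PrinterDevice classes and the "cartridge.low" event name below are made up for illustration; the pattern, not the API, is the point:

```typescript
// Minimal event-driven sketch of the low-ink printer described above.
type Handler = (payload: any) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  on(event: string, handler: Handler): void {
    const list = this.handlers.get(event) ?? [];
    list.push(handler);
    this.handlers.set(event, list);
  }

  emit(event: string, payload: any): void {
    for (const h of this.handlers.get(event) ?? []) h(payload);
  }
}

class PrinterDevice {
  constructor(private bus: EventBus, private inkLevel = 100) {}

  print(pages: number): void {
    this.inkLevel -= pages; // each page costs one unit of ink
    if (this.inkLevel < 10) {
      // State change: publish an event instead of acting directly.
      this.bus.emit("cartridge.low", { inkLevel: this.inkLevel });
    }
  }
}

const bus = new EventBus();
const orders: string[] = [];
bus.on("cartridge.low", (p) => orders.push(`order cartridge (ink at ${p.inkLevel}%)`));

const printer = new PrinterDevice(bus, 12);
printer.print(5); // ink drops to 7 -> "cartridge.low" fires
console.log(orders[0]); // "order cartridge (ink at 7%)"
```

The device only announces its state change; whoever subscribes (a reordering service, a dashboard) decides what to do with it, which is exactly the decoupling the event-driven pattern buys.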
The following diagram shows the same with different devices connected to the internet: The common capability required for devices connected to the internet is the ability to send and receive event data. This can be easily accomplished with RESTful APIs. The following are some of the IoT APIs available on the market:

- Hayo API: Used by developers to build virtual remote controls for the IoT devices in a home. The API senses and transmits events between virtual remote controls and devices, making it easier for users to achieve desired actions on applications by simply manipulating a virtual remote control
- Mozilla Battery Status API: Used to monitor the system battery levels of mobile devices and stream notification events for changes in the battery levels and charging progress. Its integration allows users to retrieve real-time updates of device battery levels and status
- Caret API: Allows status sharing across devices. The status can be customized as well

Modern web applications

Web-based applications have seen a drastic evolution from Web 1.0 to Web 2.0. Web 1.0 sites were designed mostly with static pages; Web 2.0 has added more dynamism to them. Let us take a quick snapshot of the evolution of web technologies over the years:

- 1993-1995: Static HTML websites with embedded images and minimal JavaScript
- 1995-2000: Dynamic web pages driven by JSP and ASP, CSS for styling, and JavaScript for client-side validations
- 2000-2008: Content management systems like WordPress, Joomla, Drupal, and so on
- 2009-2013: Rich internet applications, portals, animations, Ajax, mobile web applications
- 2014 onwards: Single-page apps, mashups, the social web

Single-page applications

Single-page applications are web applications designed to load the application in a single HTML page. 
Unlike traditional web applications, rather than refreshing the whole page to display content changes, they enhance the user experience by dynamically updating the current page, similar to a desktop application. The following are some of the key features or benefits of single-page applications:

- Load contents in a single page
- No refresh of the page
- Responsive design
- Better user experience
- Capability to fetch data asynchronously using Ajax
- Capability for dynamic data binding

RESTful API role in single-page applications

In a traditional web application, the client requests a URI and the requested page is displayed in the browser. Subsequently, when the user submits a form, the submitted form data is sent to the server and the response is displayed by reloading the whole page, as follows:

Social media

Social media is the future of communication that not only lets one interact but also enables the transfer of different content formats such as audio, video, and images between users. In Web 2.0 terms, social media is a channel that interacts with you along with providing information. While regular media is one-way communication, social media is two-way communication that asks for one's comments and lets one vote. Social media has seen tremendous usage via networking sites such as Facebook, LinkedIn, and so on.

Social media platforms

Social media platforms are based on Web 2.0 technology, which serves as the interactive medium for collaboration, communication, and sharing among users. 
We can classify social media platforms broadly based on their usage as follows:

- Social networking services: Platforms where people manage their social circles and interact with each other, such as Facebook
- Social bookmarking services: Allow one to save, organize, and manage links to various resources over the internet, such as StumbleUpon
- Social media news: Platforms that allow people to post news or articles, such as reddit
- Blogging services: Platforms where users can exchange their comments on views, such as Twitter
- Document sharing services: Platforms that let you share your documents, such as SlideShare
- Media sharing services: Platforms that let you share media content, such as YouTube
- Crowdsourcing services: Obtaining needed services, ideas, or content by soliciting contributions from a large group of people or an online community, such as Ushahidi

Social media benefits

User engagement through social media has seen tremendous growth, and many companies use social media channels for campaigns and branding. Let us look at the various benefits social media offers:

- Customer relationship management: A company can use social media to promote its brand and potentially benefit from positive customer reviews
- Customer retention and expansion: Customer reviews can become a valuable source of information for retention and also help to add new customers
- Market research: Social media conversations can become useful insights for market research and planning
- Gain competitive advantage: The ability to see competitors' messages enables a company to build strategies to handle its peers in the market
- Public relations: Corporate news can be conveyed to the audience in real time
- Cost control: Compared to traditional methods of campaigning, social media offers better advertising at a cheaper cost

RESTful API role in social media

Many of the social networks provide RESTful APIs to expose their capabilities. 
Let us look at some of the RESTful APIs of popular social media services:

- YouTube: Add YouTube features to your application, including the ability to upload videos, create and manage playlists, and more (https://developers.google.com/youtube/v3/)
- Facebook: The Graph API is the primary way to get data out of, and put data into, Facebook's platform. It's a low-level HTTP-based API that you can use to programmatically query data, post new stories, manage ads, upload photos, and perform a variety of other tasks that an app might implement (https://developers.facebook.com/docs/graph-api/overview)
- Twitter: Twitter provides APIs to search, filter, and create an ads campaign (https://developer.twitter.com/en/docs)

To summarize, we discussed modern technology trends and the role of RESTful APIs in each of these areas, including their implications for the cloud, virtual machines, user experience for various architectures, and building social media applications. To know more about designing and working with RESTful web services, do check out RESTful Java Web Services, Second Edition.

Getting started with Django and Django REST frameworks to build a RESTful app
How to develop RESTful web services in Spring
Introducing Intelligent Apps

Amarabha Banerjee
19 Oct 2017
6 min read
We have been a species obsessed with 'intelligence' since gaining consciousness. We have always been inventing ways to make our lives better through sheer imagination and application of our intelligence. Now, it comes as no surprise that we want our modern day creations to be smart as well - be it a web app or a mobile app. The first question that comes to mind then is what makes an application 'intelligent'? A simple answer for budding developers is that intelligent apps are apps that can take intuitive decisions or provide customized recommendations or experiences to their users, based on insights drawn from data collected from their interaction with humans. This brings up a whole set of new questions: How can intelligent apps be implemented, what are the challenges, what are the primary application areas of these so-called intelligent apps, and so on. Let's start with the first question. How can intelligence be infused into an app? The answer has many layers, just like an app does. The monumental growth in data science and its underlying data infrastructure has allowed machines to process, segregate, and analyze huge volumes of data in limited time. Now, it looks set to enable machines to glean meaningful patterns and insights from the very same data. One such interesting example is predicting user behavior patterns: what movies, food, or brand of clothing the user might be interested in, what songs they might like to listen to at different times of their day, and so on. These are, of course, on the simpler side of the spectrum of intelligent tasks that we would like our apps to perform. Many current apps by Amazon, Google, Apple, and others implement and perfect these tasks on a day-to-day basis. Complex tasks are a series of simple tasks performed in an intelligent manner. 
One such complex task would be the ability to perform facial recognition and speech recognition, and then use them to perform relevant daily tasks, be it at home or in the workplace. This is where we enter the realm of science fiction, where your mobile app would recognise your voice command while you are driving back home and send automated instructions to different home appliances, like your microwave, AC, and your PC, so that your food is served hot when you reach home, your room is set at just the right temperature, and your PC has automatically opened the next project you would like to work on. All that happens while you enter your home keys-free, thanks to facial recognition software that can map your face and identify you with more than 90% accuracy, even in low lighting conditions. APIs like IBM Watson, AT&T Speech, the Google Speech API, the Microsoft Face API, and some others provide developers with tools to incorporate features such as those listed above in their apps to create smarter apps. It sounds almost magical! But is it that simple? This brings us to the next question. What are some developmental challenges for an intelligent app? The challenges are different for web and mobile apps. Challenges for intelligent web apps For web apps, choosing the right mix of algorithms and APIs that can implement your machine learning code into a working web app is the primary challenge. Plenty of web APIs like IBM Watson, AT&T Speech, and so on are available to do this. But not all APIs can perform all the complex tasks we discussed earlier. Suppose you want an app that successfully performs both voice and speech recognition and then also performs reinforcement learning by learning from your interaction with it. You will have to use multiple APIs to achieve this. Their integration into a single app then becomes a key challenge. Here is why. Every API has its own data transfer protocols and backend integration requirements and challenges. 
Thus, our backend requirements increase significantly, in terms of data persistence, dynamic data availability, and security. Also, the fact that each of these smart apps would need customized user interface designs poses a challenge to the frontend developer. The challenge is to make a user interface so fluid and adaptive that it supports the different preferences of different smart apps. Clearly, putting together a smart web app is no child's play. That's why, perhaps, smart voice-controlled apps like Alexa are still merely working as assistants and providing only predefined solutions to you. Their ability to execute complex voice-based tasks and commands is fairly low, let alone any task not based on voice commands. Challenges for intelligent mobile apps For intelligent mobile apps, the challenges are manifold. A key reason is network dependency for data transfer. Although the advent of 4G and 5G mobile networks has greatly improved mobile network speed, the availability of the network and the data transfer speeds still pose a major challenge. This is due to the high volumes of data that intelligent mobile apps require to perform efficiently. To circumvent this limitation, vendors like Google are trying to implement smarter APIs on the mobile device's local storage. But this approach requires a huge increase in the mobile chip's computation capabilities, something that's not currently available. Maybe that's why Google has also hinted at jumping into the chip manufacturing business if their computation needs are not met. Apart from these issues, running multiple intelligent apps at the same time would also require a significant increase in the battery life of mobile devices. Finally comes the last question. What are some key applications of intelligent apps? We have explored some areas of application in the previous sections, keeping our focus on just web and mobile apps. 
Broadly speaking, whatever makes our daily life easier is a potential application area for intelligent apps. From controlling the AC temperature automatically, to operating the oven and microwave remotely, to a vacuum cleaner with robotic AI capabilities, to driving the car, everything falls in the domain of intelligent apps. The real questions for us are: What can we achieve with our modern computation resources and our data handling capabilities? How can mobile computation capabilities and chip architecture be improved drastically so that smart apps can perform complex tasks faster and ease our daily workflow? Only the future holds the answer. We are rooting for the day when we will rise to become a smarter race by delegating less important yet intelligent tasks to our smarter systems, by creating intelligent web and mobile apps efficiently and effectively. The culmination of these apps along with hardware-driven AI systems could eventually lead to independent smart systems, a topic we will explore in the coming days.
What can happen when artificial intelligence decides on your loan request

Guest Contributor
23 Feb 2019
5 min read
As the number of potential borrowers continues to grow rapidly, loan companies and banks are having a hard time figuring out how likely their customers are to pay back. Getting information on clients' creditworthiness is probably the greatest challenge for most financial companies, and it especially concerns those clients who don't have any credit history yet. There is no denying that the alternative lending business has become one of the most influential financial branches both in the USA and Europe. Debt is a huge business these days, and it needs a lot of resources. In such a challenging situation, any means that can improve productivity and reduce the risk of mistakes while performing financial activities are warmly welcomed. This is how Artificial Intelligence became the redemption for loan providers. Fortunately for lenders, AI deals with this task successfully by following the borrowers' digital footprint. For example, some applications for digital lending collect and analyze an individual's web browsing history (upon receiving their personal agreement on the use of this information). In some countries, such as China and some in Africa, they may also look through their social network profiles, geolocation data, and the messages sent to friends and family, counting the number of punctuation mistakes. The collected information helps loan providers make the right decision on their clients' creditworthiness and avoid long loan processes.

When AI overfits

Unfortunately, there is the other side of the coin. There's a theory which states that people who pay for their gas inside the petrol station, not at the pump, are usually smokers. And that is a group whose creditworthiness is estimated to be low. But what if this poor guy simply wanted to buy a Snickers? This example shows that if a lender doesn't carefully check the information gathered by AI software, they may easily end up making bad mistakes and misinterpretations. 
Artificial Intelligence in the financial sector may significantly reduce costs, efforts, and further financial complications, but there are hidden social costs such as the above. A robust analysis, design, implementation, and feedback framework is necessary to meaningfully counter AI bias.

Other use cases for AI in finance

Of course, there are also enough examples of how AI helps to improve customer experience in the financial sector. Some startups use AI software to help clients find the company that is best at providing them with the required service. They juxtapose the clients' requirements with the companies' services, finding perfect matches. Even though this technology reminds us of how dating apps work, such applications can drastically save time for both parties and help borrowers pay faster. AI can also be used for streamlining finances. AI helps banks and alternative lending companies automate some of their working processes, such as basic customer service, contract management, or transaction monitoring. A good example is Upstart, the pet project of two former Google employees. The startup was originally aimed at helping young people lacking a credit history to get a loan or any other kind of financial support. For this purpose, the company uses the clients' educational background and experience, taking into account things such as their attained degrees and school/university attendance. However, such an approach to lending may end up being a little snobbish: it can simply overlook large groups of the population who can't afford higher education. As a result of an insufficient educational background, these people can be deprived of the opportunity to get a loan. Nonetheless, one of the main goals of the company was automating as many of its operating procedures as possible. By 2018, more than 60% of all their loans had been fully automated, with more to come. 
We cannot automate fairness and opportunity, yet

The implementation of machine learning in providing loans by checking people's digital footprints may lead to ethical and legal disputes. Even today, some people state that the use of AI in the financial sector has encouraged inequality in the number of loans provided to the black and white populations of the USA. They believe that AI continues the bias against minorities and makes black people "underbanked." Both lending companies and banks should remember that the quality of work done these days with the help of machine learning methods highly depends on people, both the employees who use the software and the AI developers who create and fine-tune it. So we should see AI in loan management as a useful tool, but not as a replacement for humans. Author Bio Darya Shmat is a business development representative at Iflexion, where Darya expertly applies 10+ years of practical experience to help banking and financial industry clients find the right development or QA solution. Blockchain governance and uses beyond finance – Carnegie Mellon university podcast Why Retailers need to prioritize eCommerce Automation in 2019 Glancing at the Fintech growth story – Powered by ML, AI & APIs