Tech Guides - Data


Everything you need to know about Ethereum

Packt Editorial Staff
10 Apr 2018
8 min read
Ethereum was first conceived of by Vitalik Buterin in November 2013. The critical idea proposed was the development of a Turing-complete language that allows the development of arbitrary programs (smart contracts) for blockchain and decentralized applications. This is in contrast to Bitcoin, where the scripting language is limited in nature and allows only the necessary operations. This is an excerpt from the second edition of Mastering Blockchain by Imran Bashir.

The following table shows all the releases of Ethereum, from the first release to the planned final release:

Version                                   Release date
Olympic                                   May 2015
Frontier                                  July 30, 2015
Homestead                                 March 14, 2016
Byzantium (first phase of Metropolis)     October 16, 2017
Metropolis                                To be released
Serenity (final version of Ethereum)      To be released

The first version of Ethereum, called Olympic, was released in May 2015. Two months later, a second version was released, called Frontier. After about a year, another version named Homestead, with various improvements, was released in March 2016. The latest Ethereum release is called Byzantium. This is the first part of the development phase called Metropolis. This release implemented a planned hard fork at block number 4,370,000 on October 16, 2017. The second part of this release, called Constantinople, is expected in 2018, but there is no exact time frame available yet. The final planned release of Ethereum is called Serenity, which is intended to introduce the final version of a PoS-based blockchain in place of PoW.

The Yellow Paper

The Yellow Paper, written by Dr. Gavin Wood, serves as a formal definition of the Ethereum protocol. Anyone can implement an Ethereum client by following the protocol specifications defined in the paper. While this paper is a challenging read, especially for those who do not have a background in algebra or mathematics, it contains a complete formal specification of Ethereum. This specification can be used to implement a fully compliant Ethereum client. The list of symbols used in the paper, along with their meanings, is provided here in the hope that it will make reading the Yellow Paper more accessible. Once the symbol meanings are known, it becomes much easier to understand how Ethereum works in practice.

Symbol      Meaning
≡           Is defined as
=           Is equal to
≠           Is not equal to
║...║       Length of
∈           Is an element of
∉           Is not an element of
∀           For all
∃           There exists
∪           Union
∧           Logical AND
∨           Logical OR
⊕           Exclusive OR
·           Sequence concatenation
Λ           Contract creation function
:           Such that
{}          Set
()          Function of tuple
[]          Array indexing
(a, b)      Real numbers >= a and < b
>           Is greater than
≤           Less than or equal to
+           Addition
-           Subtraction
∑           Summation
∅           Empty set, null
⌊ ⌋         Floor, lowest element
⌈ ⌉         Ceiling, highest element
σ           Sigma, world state
μ           Mu, machine state
Υ           Upsilon, Ethereum state transition function
Π           Block-level state transition function
{           Describing various cases of if, otherwise
            Increment
            No of bytes

Ethereum blockchain

Ethereum, like any other blockchain, can be visualized as a transaction-based state machine. This definition is referred to in the Yellow Paper. The core idea is that in the Ethereum blockchain, a genesis state is transformed into a final state by executing transactions incrementally. The final transformation is then accepted as the absolute, undisputed version of the state.
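As a rough illustration of this idea, here is a minimal sketch of a transaction-based state machine in Python. It is not Ethereum's actual state transition function (which also covers gas, nonces, signatures, and contract execution); the balances and the apply_transaction helper are hypothetical, and only plain value transfers are modeled.

```python
# Minimal sketch of a transaction-based state machine (illustrative only).
# The world state is modeled as a simple mapping of account -> balance.

def apply_transaction(state, tx):
    """Return a new state after applying one value-transfer transaction."""
    sender, recipient, amount = tx["from"], tx["to"], tx["value"]
    if state.get(sender, 0) < amount:
        raise ValueError("insufficient balance")  # invalid tx leaves state untouched
    new_state = dict(state)
    new_state[sender] -= amount
    new_state[recipient] = new_state.get(recipient, 0) + amount
    return new_state

# Genesis state, then transactions applied incrementally:
genesis = {"0x4718bf7a": 10, "0x741f7a2": 0}   # balances in Ether
txs = [{"from": "0x4718bf7a", "to": "0x741f7a2", "value": 2}]

state = genesis
for tx in txs:
    state = apply_transaction(state, tx)       # conceptually, sigma' = Upsilon(sigma, T)

print(state)  # {'0x4718bf7a': 8, '0x741f7a2': 2}
```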
In the following diagram, the Ethereum state transition function is shown, where a transaction execution has resulted in a state transition. In the example, a transfer of two Ether from address 4718bf7a to address 741f7a2 is initiated. The initial state represents the state before the transaction execution, and the final state is what the morphed state looks like. Mining plays a central role in state transition, and we will elaborate on the mining process in detail in later sections. The state is stored on the Ethereum network as the world state. This is the global state of the Ethereum blockchain.

How Ethereum works from a user's perspective

For all the conversation around cryptocurrencies, it is rare for anyone to actually explain how they work from the perspective of a user. Let's take a look at how it works in practice, using the example of one person (Bashir) transferring money to another (Irshad). You may also want to read our post on whether Ethereum will eclipse Bitcoin. For the purposes of this example, we are using the Jaxx wallet; however, you can use any cryptocurrency wallet.

1. First, either the receiver requests money by sending the request to the sender, or the sender decides to send money to the receiver. The request can be made by sending the receiver's Ethereum address to the sender. In our example, Irshad requests money from Bashir by sharing her Ethereum address, encoded as a QR code, which can be shared via email, text, or any other communication method.

2. Once Bashir receives this request, he either scans the QR code or copies the Ethereum address into his Ethereum wallet software and initiates a transaction. This process is shown in the following screenshot, where the Jaxx Ethereum wallet software on iOS is used to send money to Irshad. The sender enters both the amount and the destination address, and the final step, just before sending the Ether, is to confirm the transaction.

3. Once the request (transaction) for sending money is constructed in the wallet software, it is broadcast to the Ethereum network. The transaction is digitally signed by the sender as proof that he is the owner of the Ether.

4. The transaction is then picked up by nodes called miners on the Ethereum network for verification and inclusion in a block. At this stage, the transaction is still unconfirmed.

5. Once it is verified and included in a block, the PoW process begins. Once a miner finds the answer to the PoW problem, by repeatedly hashing the block with a new nonce, the block is immediately broadcast to the rest of the nodes, which then verify the block and the PoW. If all the checks pass, the block is added to the blockchain, and the miners are paid rewards accordingly.

6. Finally, Irshad receives the Ether, and it is shown in her wallet software.

On the blockchain, this transaction is identified by the following transaction hash:

0xc63dce6747e1640abd63ee63027c3352aed8cdb92b6a02ae25225666e171009e

Details regarding this transaction can be viewed in a block explorer, as shown in the following screenshot. This walkthrough should give you some idea of how it all works.
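To make the mining step above more concrete, here is a minimal proof-of-work sketch in Python. It only illustrates the idea of repeatedly hashing a block with a new nonce until the hash falls below a difficulty target; Ethereum's real PoW algorithm (Ethash) is memory-hard and considerably more involved.

```python
import hashlib

def mine(block_data: str, difficulty_bits: int = 16):
    """Find a nonce such that sha256(block_data + nonce) is below a target.

    Illustrative only: real Ethereum PoW (Ethash) is memory-hard and
    very different from plain SHA-256 hashing.
    """
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest  # the "answer" to the PoW problem
        nonce += 1                # try again with a new nonce

nonce, digest = mine("example block: Bashir -> Irshad, 2 ETH")
print(f"nonce={nonce}, hash={digest}")
```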
Different Ethereum networks

The Ethereum network is a peer-to-peer network in which nodes participate in order to maintain the blockchain and contribute to the consensus mechanism. Networks can be divided into three types, based on requirements and usage. These types are described in the following subsections.

Mainnet

Mainnet is the current live network of Ethereum. The current version of mainnet is Byzantium (Metropolis) and its chain ID is 1. The chain ID is used to identify the network. A block explorer, which shows detailed information about blocks and other relevant metrics, is available and can be used to explore the Ethereum blockchain.

Testnet

Testnet is the widely used test network for the Ethereum blockchain. This test blockchain is used to test smart contracts and DApps before they are deployed to the live production blockchain. Because it is a test network, it allows experimentation and research. The main testnet is called Ropsten, which contains all the features of the other, smaller, special-purpose testnets that were created for specific releases. For example, other testnets include Kovan and Rinkeby, which were developed for testing Byzantium releases. The changes that were implemented on these smaller testnets have also been implemented on Ropsten, so the Ropsten test network now contains all the properties of Kovan and Rinkeby.

Private net

As the name suggests, this is a private network that can be created by generating a new genesis block. This is usually the case in private blockchain distributed ledger networks, where a private group of entities start their own blockchain and use it as a permissioned blockchain. The following table shows the list of Ethereum networks with their network IDs. These network IDs are used by Ethereum clients to identify the network.

Network name               Network ID / Chain ID
Ethereum mainnet           1
Morden                     2
Ropsten                    3
Rinkeby                    4
Kovan                      42
Ethereum Classic mainnet   61
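If you are writing tooling against these networks, the same mapping can live in code. Here is a small illustrative Python helper mirroring the table above, handy when a client reports a numeric chain ID:

```python
# Network ID / chain ID registry, mirroring the table above.
NETWORKS = {
    1:  "Ethereum mainnet",
    2:  "Morden",
    3:  "Ropsten",
    4:  "Rinkeby",
    42: "Kovan",
    61: "Ethereum Classic mainnet",
}

def network_name(chain_id: int) -> str:
    """Map a chain ID reported by a client to a human-readable name."""
    return NETWORKS.get(chain_id, f"unknown network (chain ID {chain_id})")

print(network_name(3))   # Ropsten
print(network_name(99))  # unknown network (chain ID 99)
```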
You should now have a good foundation of knowledge to get started with Ethereum. To learn more about Ethereum and other cryptocurrencies, check out the new edition of Mastering Blockchain.

Other posts from this book:

- A brief history of Blockchain
- Write your first Blockchain: Learning Solidity Programming in 15 minutes
- 15 ways to make Blockchains scalable, secure and safe!
- What is Bitcoin


A Guide to safe cryptocurrency trading

Guest Contributor
02 Aug 2018
8 min read
So, you've decided to take a leap of faith and start trading in cryptocurrency. But do you know how to do it safely?

Cryptocurrency has risen in popularity of late, especially since its market reached half a trillion dollars in 2017. This is good news if you have ever wanted to trade in a system that veers away from tradition, or if you simply distrust the traditional market with all its brokers and bankers. Cryptocurrency trading is, however, not without risks. Hackers work hard every day to steal and scam you out of your hard-earned crypto cash by stealing or coaxing your private keys directly from you. The problem is that there is nowhere to run if you lose your money, since cryptocurrency is largely unregulated. So, should you steer clear of cryptocurrency after all? Heck, no! Read this guide and you'll be a few steps closer to safe cryptocurrency trading in no time.

Know the basics

As with any endeavor that involves money, you should at least learn the basic ins and outs of cryptocurrency trading. Remember to always exercise prudence when dealing in cryptocurrency. Also, look for books or reliable sites to guide you through the various risks you might face in cryptocurrency trading. Finally, keep up to date with the latest news and trends involving cryptocurrency-related cybersecurity threats.

Use a VPN

Most people believe that cryptocurrencies are great for privacy because they don't need any personal information to buy or sell. In short, they believe cryptocurrencies are anonymous. But this couldn't be further from the truth. Cryptocurrencies are pseudonymous, not anonymous. Each coin address acts as your pseudonym, which means that if your transactions are ever linked to your identity (for example, via your IP address), you'll suddenly find yourself out in the open. A VPN hides this trail by hiding your IP address and encrypting your personal data (like your location and ISP). To ensure that your sensitive transactions (especially those made over public Wi-Fi) stay private, use only the best VPN you can afford. The keyword here is "afford". Never use free VPNs while trading cryptocurrency, because free VPNs have been known to share or sell your personal information to their partners or third parties. Worse still, these free VPNs aren't exactly the most secure. This was the case with the popular crypto service MyEtherWallet, which suffered a serious security issue after the popular free VPN Hola was compromised for five hours. This doesn't really come as a surprise, since Hola was never a secure VPN to begin with. Check out this Hola VPN review to see for yourself. If you want better VPN options for cryptocurrency trading, try out ZenMate and F-Secure Freedome.

Install an antivirus program

You can add another layer of safety by installing a high-quality antivirus program. These programs protect you from malware that could take over your computer or device. An antivirus program also protects you from ransomware, which hackers use to wrest control of your computer or device by encrypting some or all of your data and holding it hostage until you pay the ransom, which costs $133,000 on average. Now, unlike with VPNs, you can get quality protection from free antivirus programs. The best ones, so far, are Avast Free Antivirus and Bitdefender Antivirus.

Keep your private key to yourself

Your private key is basically the password you use to access your cryptocurrency, and it's the only thing a hacker needs to access your cryptocurrency. Never share your private key with anyone.
Don't even show a QR code containing your private key. With that said, it's important to note that your private key is usually stored in your cryptocurrency wallet, which is either "hot" or "cold". A "hot" wallet is one that is always online and ready to use, while a "cold" wallet is usually offline and only goes online when you need to use it.

Hot wallets are provided by cryptocurrency exchanges when you register an account. They are easy to use and make your cryptocurrency more accessible. However, being provided by an exchange means that you might lose all the funds in that wallet if that exchange ever gets hacked, which usually results in the company shutting down (as happened with Bitfinex, Mt. Gox, and Youbit). How do you avoid this? Easy. Just keep the exact amount you need to spend in your hot wallet and keep the rest in your cold wallet, a.k.a. cold storage, which, as already mentioned, is entirely offline. This way, if your hot wallet provider ever gets hacked and goes out of business, you will only have experienced a relatively small loss.

Now, there are three types of cold wallets to choose from. When choosing which one to use, it's always important to keep in mind your purpose and the amount of cryptocurrency you plan to keep in that wallet. The three types are as follows (a toy sketch of the brainwallet idea appears after this list):

- Hardware wallet: By far the most popular type, this wallet takes the form of a device that you plug into your computer's USB port. To date, there has been no record of cryptocurrency being stolen from a hardware wallet, which makes it useful when you plan to hold large amounts of cryptocurrency. This form of cold wallet is also convenient, as you don't need to type in your details each time you buy or sell cryptocurrency. Check out this list for the best cryptocurrency hardware wallets.

- Paper wallet: This simply involves printing out your public and private keys on a piece of paper, thus preventing hackers from accessing them. However, this does make it a bit tedious to type in your keys every time you need to use them online. You also run the risk of losing all your funds if the paper somehow winds up in someone else's hands, so remember to keep your paper wallet safe and secure.

- Brainwallet: This type of wallet involves keeping your keys in your brain. This is usually done by memorizing a seed phrase. As long as you don't record your seed phrase anywhere else, you are the only one who will ever know your keys, thus making this the most secure wallet of all. However, if the owner of the seed phrase ever forgets it (or worse, dies), the cryptocurrency connected to that seed phrase is lost forever.
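To illustrate the brainwallet idea, here is a minimal Python sketch that deterministically derives key material from a memorized phrase by hashing it. This is purely illustrative: real wallets use standardized schemes such as BIP-39 with proper key stretching, and deriving a key from a human-chosen phrase with a single hash like this is insecure in practice.

```python
import hashlib

def brainwallet_key(seed_phrase: str) -> str:
    """Derive key material from a memorized phrase (illustrative only).

    Real wallets use standards such as BIP-39/BIP-32 with salted,
    iterated key derivation; a single unsalted hash of a human-chosen
    phrase is brute-forceable and should never hold real funds.
    """
    normalized = " ".join(seed_phrase.lower().split())  # tolerate spacing/case slips
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Whoever can reproduce the phrase can reproduce the key; nothing is stored.
print(brainwallet_key("correct horse battery staple"))
```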
Beware of phishing

Phishing attacks usually arrive through deceptive emails and websites, where a hacker employs fraudulent (usually psychological) tactics to get you to divulge private details. This type of cyber attack was responsible for over $115 million in stolen Ethereum last year alone. Now, you might be thinking, "Why don't people just avoid suspicious emails or messages?" The thing is, they're hard to resist. If you want to avoid falling for phishing attempts, check out this post on how to tell if someone is phishing for your cryptocurrency.

Trade in secure exchanges

Cryptocurrencies are usually bought and sold on a cryptocurrency exchange. However, not all exchanges can be trusted, as some have already been proven fake. The problem here is that there is no inherent protection and nowhere to turn for help if you lose your money. This is because cryptocurrency is, for the most part, unregulated, although the world is starting to catch up. That said, make sure to do your research before investing your money in any cryptocurrency exchange. You can also check out these 20 security tips for a more detailed list of safe trading practices.

Conclusion

Cryptocurrency trading can be hard, confusing, and downright risky. But if you follow this guide, you're at least a few steps closer to safe cryptocurrency trading. Arm yourself with at least a basic knowledge of how cryptocurrency trading works. Don't fall for the illusion of anonymity that has fooled others: get the best VPN you can afford, and remember to install a reliable antivirus program to avoid malware and ransomware. Never reveal your private key. Hot wallets are fine if they only contain the exact amount you want to spend, but it's better to keep the rest of your keys safe in a cold wallet that fits your purpose. Be wary of suspicious sites, emails, or messages that could turn out to be phishing scams, and only trade on secure cryptocurrency exchanges.

About the author: Dana Jackson, a U.S. expat living in Germany, is the founder of PrivacyHub. She loves all things related to security and privacy. She holds a degree in Political Science, and loves to call herself a scientist. Dana also loves morning coffee and her dog Paw.

- Cryptocurrency-based firm, Tron acquires BitTorrent
- Can Cryptocurrency establish a new economic world order?
- Top 15 Cryptocurrency Trading Bots


3 ways JupyterLab will revolutionize Interactive Computing

Amey Varangaonkar
17 Nov 2017
4 min read
The history of the Jupyter notebook is quite interesting. It started as a spin-off project from IPython in 2011, with support for the leading languages for data science such as R, Python, and Julia. As the project grew, Jupyter's core focus shifted to being more interactive and user-friendly. It soon became clear that Jupyter wasn't just an extension of IPython, which led to the 'Big Split' in 2014. Code reusability, easy sharing and deployment, and extensive support for third-party extensions: these are some of the factors that have made Jupyter the notebook of choice for most data professionals. And now, Jupyter plans to go a level beyond with JupyterLab, the next-gen Jupyter notebook with strong interactive and collaborative computing features.

What is JupyterLab? JupyterLab is the next-generation end-user version of the popular Jupyter notebook, designed to enhance interaction and collaboration among users. It takes all the familiar features of the Jupyter notebook and presents them through a powerful, user-friendly interface.

Here are 3 ways, or reasons shall we say, to look forward to this exciting new project, and how it will change interactive computing as we know it.

1. Improved UI/UX

One of Jupyter's strongest and most popular features is that it is very user-friendly, and the overall experience of working with Jupyter is second to none. With improvements in the UI/UX, JupyterLab offers a cleaner interface, with an overall feel very similar to the current Jupyter notebooks. Although JupyterLab has been built with a web-first vision, it also provides a native Electron app for a simplified user experience. The other key difference is that JupyterLab is quite command-centric, encouraging users to prefer keyboard shortcuts for quicker tasks. These shortcuts differ a bit from those of other text editors and IDEs, but they are customizable.

2. Better workflow support

Many data scientists usually start coding on an interactive shell and then migrate their code to a notebook for building and deployment purposes. With JupyterLab, users can perform all these activities more seamlessly and with minimal effort. It offers a document-less console for quick data exploration and an integrated text editor for running blocks of code outside the notebook.

3. Better interactivity and collaboration

Probably the defining feature that propels JupyterLab over Jupyter and other notebooks is how interactive and collaborative it is. JupyterLab has a side-by-side editing feature and provides a crisp layout that allows you to view your data, your notebook, your command console, and graphical output, all at the same time. Better real-time collaboration is another big feature promised by JupyterLab: users will be able to share their notebooks Google Drive or Dropbox style, without having to switch to a different tool. It will also support a plethora of third-party extensions to this effect, with the Google Drive extension being the most talked about. Popular Python visualization libraries such as Bokeh will now be integrated with JupyterLab, as will extensions to view and handle different file types, such as CSV for interactive rendering and GeoJSON for geographic data structures. JupyterLab has gained a lot of traction in the last few years.
While it is still some time away from being generally available, the current indicators look quite strong. With over 2,500 stars and 240 enhancement requests on GitHub already, the strong interest among users is clear. Judging by the initial impressions it has made on some users, JupyterLab hasn't made a bad start at all, and looks well and truly set to replace the current Jupyter notebooks in the near future.


What can Google Duplex do for businesses?

Natasha Mathur
16 May 2018
9 min read
When talking about the capabilities of AI-driven digital assistants, the most talked-about issue is their inability to converse the way a real human does. The robotic tone of virtual assistants has long kept them from imitating real humans. And it's not just the flat monotone; it's about understanding the nuances of language: pitch, intonation, sarcasm, and a lot more. Now, what if a technology emerged that is capable of sounding and behaving almost human? Well, look no further: Google Duplex is here to dominate the world of digital assistants.

Google introduced the new Duplex at Google I/O 2018, its annual developer conference, last week. But what exactly is it? Google Duplex is a newly added feature of the famed Google Assistant. Adding to the capabilities of Google Assistant, it is able to make phone calls for users and imitate natural human conversation almost perfectly, to get day-to-day tasks (such as booking table reservations, hair salon appointments, and so on) done in an easy manner. It includes pause-fillers and phrases such as "um", "uh-huh", and "erm" to make the conversation sound as natural as possible. Don't believe me? Check out the audio yourself!

Google Duplex booking appointments at a hair salon: https://hub.packtpub.com/wp-content/uploads/2018/05/Google-Duplex-hair-salon.mp3

Google Duplex making table reservations at a restaurant: https://hub.packtpub.com/wp-content/uploads/2018/05/Google-Duplex-table-reservation.mp3

The demo call recording of the assistant and the business employee, presented by Sundar Pichai, Google's CEO, during the opening keynote, befuddled the entire world about who was the assistant and who was the human, and went noticeably viral. A lot of questions are buzzing around whether Google Duplex just passed the Turing Test. The Turing Test assesses a machine's ability to present intelligence close or equivalent to that of a human being. Did the new human-sounding robot assistant pass the Turing Test yet? No, but it's certainly the voice AI that has come closest to passing it.

Now how does Google Duplex work? It's quite simple. Google Duplex finds out the information you need that isn't out there on the internet by making a direct phone call. For instance, if a restaurant has moved and its new address is nowhere to be found online, Google Duplex will call the restaurant and check on its new address for you. The system comes with a self-monitoring capability that helps it recognize complex tasks it cannot accomplish on its own. Such cases are signaled to a human operator, who then takes care of the task.

To get a bit technical, Google Duplex makes use of Recurrent Neural Networks (RNNs), which are created using TensorFlow Extended (TFX), a machine learning platform. Duplex's RNNs are trained on anonymized phone conversation data. Data anonymization helps protect the identity of a company or an individual by removing the data sets related to them. The network uses the output of Google's automatic speech recognition technology, the conversation history, and different parameters of the conversation. The model also makes use of hyperparameter optimization from TFX, which further enhances it.

But how does it sound natural?

Google uses concatenative text-to-speech (TTS) along with a synthesis TTS engine (using Tacotron and WaveNet) to control the intonation depending on the circumstances. Concatenative TTS is a technique that converts normal text into speech by concatenating, or linking together, recorded pieces of speech. A synthesis TTS engine lets developers modify the speech rate, volume, and pitch of the synthesized output. Including speech disfluencies ("hmm"s, "erm"s, and "uh"s) makes Duplex sound more human. These speech disfluencies are added when very different sound units are combined in the concatenative TTS, or to add synthetic waits, which allows the system to signal in a natural way that it is still processing (equivalent to what humans do when trying to sort out their thoughts). Also, the delay or latency should match people's expectations: Duplex is capable of figuring out when to give slow or fast responses, using low-confidence models or faster approximations. Google also found that including more latency helps make the conversation sound more natural.
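As a rough illustration of the concatenative idea, here is a toy Python sketch that "speaks" by stitching together pre-recorded waveform snippets and inserting short pauses between them. The unit names and snippet data are entirely hypothetical stand-ins; Duplex's actual Tacotron- and WaveNet-driven pipeline is far more sophisticated.

```python
import numpy as np

SAMPLE_RATE = 16_000

# Hypothetical bank of pre-recorded speech units (placeholder noise bursts here;
# a real system would store recorded phrases or smaller phonetic units).
rng = np.random.default_rng(0)
units = {
    "hello":   rng.normal(size=SAMPLE_RATE // 2),
    "um":      rng.normal(size=SAMPLE_RATE // 4),
    "four pm": rng.normal(size=SAMPLE_RATE // 2),
}

def concatenative_tts(unit_names, pause_ms=120):
    """Build an utterance by concatenating recorded units.

    A short silence is inserted between units; dropping fillers like "um"
    into the sequence is (very loosely) how disfluencies are added.
    """
    pause = np.zeros(int(SAMPLE_RATE * pause_ms / 1000))
    pieces = []
    for name in unit_names:
        pieces.extend([units[name], pause])
    return np.concatenate(pieces[:-1])  # drop the trailing pause

waveform = concatenative_tts(["hello", "um", "four pm"])
print(f"{waveform.size / SAMPLE_RATE:.2f} seconds of audio")
```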
Some potential applications of Google Duplex for businesses

Now that we've covered the what and how of this new technology, let's look at five potential applications of Google Duplex in the near future.

Customer service

Basic forms of AI using natural language processing (NLP), such as chatbots and existing voice assistants like Siri and Alexa, are already in use within the customer care industry. Google Duplex paves the way for an even more interactive way of engaging customers and gathering information, given its spectacular human-sounding capability. According to Gartner, "By 2018, 30% of our interactions with technology will be through 'conversations' with smart machines." With Google Duplex being the latest smart machine introduced to the world, the basic operations of the customer service industry will become easier, more manageable, and more efficient. From providing quick solutions to initial customer support problems to delivering internal services to employees, Google Duplex fits the bill perfectly. And it will only get better with further advances in NLP. So far, chatbots and digital assistants have been miserable at handling irate customers. I can imagine Google Duplex, in John Legend's smooth voice, calming down an angry customer, or even making successful sales pitches to potential leads with all its charm and suavity! Of course, Duplex must undergo the right customer-management training, with a massive amount of quality data on what good and bad handling look like, before it is ready for such a challenge. Another area of customer service where Google Duplex can play a major role is IT support. Instead of connecting with a human operator, the user would first be connected to Google Duplex, making the entire experience friendly and personalized from the user's perspective, and saving major costs for organizations.

HR department

Google Duplex can also extend a helping hand in the HR department. The preliminary rounds of talent acquisition, where hiring executives make phone calls to their respective candidates, could be handled by Google Duplex, provided it gets the right training. Making note of basic qualifications and candidate details, and scheduling interviews, are all functions that Google Duplex should be able to perform effectively. The Google Assistant can collect this information, and further rounds can then be conducted by human HR personnel.
This could greatly cut down the time HR executives spend on the first few rounds of shortlisting, freeing them to focus on other strategically important areas of hiring.

Personal assistants and productivity

As presented at Google I/O 2018, Google Duplex is capable of booking appointments at hair salons, making table reservations, and finding out holiday hours over the phone. It is not a stretch to assume that it could also order takeaway food over a phone call, check with the delivery man regarding an order, cancel appointments, make business inquiries, and so on. Apart from that, it is a great aid for people with hearing loss, as well as for people who do not speak the local language, by allowing them to carry out tasks over the phone.

Healthcare industry

There is already plenty of talk surrounding the use of Alexa, Siri, and other voice assistants in healthcare, and Google Duplex is a new addition to the family. With its natural way of conversing, Duplex can:

- Let patients know their wait time for emergency rooms.
- Check with the hospital regarding their health appointments.
- Order the necessary equipment for hospital use.

Another allied area is elder care. Google Duplex could help reduce ailments related to loneliness by engaging with users on a more human level. It could also assist with preventive care and with the management of lifestyle diseases such as diabetes, by ensuring patients continue their medication intake, keep their appointments, get emergency first-aid help, call 911, and so on.

Real estate industry

Duplex-enabled Google Assistants could make realtors' tasks easy. Duplex can call potential sellers and buyers, making it easy for realtors to select the right customers. The conversation between Google Duplex (helping a realtor) and a customer wanting to buy a house could look something like this:

Google Duplex: Hi! I heard you are house hunting. Are you looking to buy or sell a property?
Customer: Hey, I'm looking to buy a home in the Washington area.
Google Duplex: That's great! What part of Washington are you looking in?
Customer: I'm looking for a house in Seattle. 3 bedrooms and 3 baths would be fine.
Google Duplex: Sure, umm, may I know your budget?
Customer: Somewhere between $749,000 and $850,000, is that fine?
Google Duplex: Ahh okay sure, I've made a note and I'll call you once I find the right matches.
Customer: Yeah, sure.
Google Duplex: Okay, thanks.
Customer: Thanks, bye!

Google Duplex then makes a note of the details on the realtor's phone, greatly narrowing down the effort realtors spend cold-calling potential sellers. At the same time, the broker also receives an email with the customer's details and contact information for a follow-up.

Every rose has its thorns. What's Duplex's thorny issue?

With all the good hype surrounding Google Duplex, there have been some controversies regarding its ethics. Some people have questions and mixed reactions about Google Duplex fooling people about its identity, as the voice of Duplex differs significantly from that of a robot. A lot of talk surrounding this issue is trending on several Twitter threads. Google has tried to hush these questions by saying that 'transparency in technology' is important and that it is 'designing this feature with disclosure built-in', which will help in identifying the system. Google has also said that it welcomes any feedback people have regarding its new product.
Google has successfully managed to awe people across the globe with the new and innovative Google Duplex. But there is still a long way to go, even though Google has already taken a step forward in the effort to improve human relationships with machines. If you enjoyed reading this article and want to know more, check out the official Google Duplex blog post.

- Google's Android Things, developer preview 8: First look
- Google News' AI revolution strikes balance between personalization and the bigger picture
- Android P new features: artificial intelligence, digital wellbeing, and simplicity


Artificial General Intelligence, did it gain traction in research in 2018?

Prasad Ramesh
21 Feb 2019
4 min read
In 2017, we predicted that artificial general intelligence would gain traction in research and that certain areas would aid the development of AGI systems. The prediction was made as part of a set of AI predictions in an article titled 18 striking AI Trends to watch in 2018. Let's see how 2018 went for AGI research.

Artificial general intelligence, or AGI, is an area of AI in which efforts are made to give machines intelligence closer to the complex nature of human intelligence. Such a system could possibly, in theory, perform any task that a human can, with the ability to learn as it progresses through tasks and collects data and sensory input. Human intelligence also involves learning a skill and applying it to other areas. For example, if a human learns Dota 2, they can apply the same learned experience to other similar strategy games; only the UI and the characters in the game will be different. A machine cannot do this: AI systems are trained for a specific area, and the skills cannot really be transferred to another task with complete efficiency, and not without the risk of incurring technical debt. That is, a machine cannot generalize skills the way a human can.

Come 2018, we saw DeepMind's AlphaZero, something that at least begins to show what an idea of AGI could look like. But even this is not really AGI: an AlphaZero-like system may excel at playing a variety of games, or even understand the rules of novel games, but it cannot deal with the real world and its challenges.

Some groundwork and basic ideas for AGI were set out in a paper by the US Air Force. In the paper, Dr. Paul Yaworsky says that artificial general intelligence is an effort to cover the gap between lower- and higher-level work in AI; that is, to try to make sense of the abstract nature of intelligence. The paper also presents an organized hierarchical model for intelligence that takes the external world into account.

One of Packt's authors, Sudharsan Ravichandiran, thinks that: "Great things are happening around RL research each and every day. Deep meta reinforcement learning will be the future of AI, where we will be so close to achieving artificial general intelligence (AGI). Instead of creating different models to perform different tasks, with AGI, a single model can master a wide variety of tasks and mimic human intelligence."

Honda came up with a program called Curious Minded Machine in association with MIT, the University of Pennsylvania, and the University of Washington. The idea sounds simple at first: build a model of how children 'learn to learn'. But something children do instinctively is a very complex task for a machine with artificial intelligence. The teams will showcase their work in the fields they are working on at the end of the three years since the program's inception.

There was another effort, by SingularityNET and Mindfire, to explore AI and "crack the brain code". The aim is to better understand the functioning of the human brain. Together, these two companies will focus on three key areas: talent, AI services, and AI education. Mindfire Mission 2 will take place in early 2019 in Switzerland.

These were the areas of work we saw on AGI in 2018. Only small steps were taken in this research direction, and nothing noteworthy gained mainstream traction. On average, experts think AGI is at least 100 more years away from being a reality, as per Martin Ford's interviews with machine learning experts for his best-selling book, Architects of Intelligence.

OpenAI released a new language model called GPT-2 in February 2019. With just one line of words as a prompt, the model can generate whole articles, and the results are good enough to pass as something written by a human. This does not mean that the machine actually understands human language; it is merely generating sentences by associating words.
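For a sense of what that looks like in code, here is a minimal sketch using the Hugging Face transformers library, which provides a pretrained GPT-2. Note that this library is our assumption for illustration; it is not how OpenAI originally released the model.

```python
# pip install transformers torch
from transformers import pipeline

# Load a pretrained GPT-2 text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# One line of words in, a whole passage out.
prompt = "Artificial general intelligence gained traction in research when"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])
```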
This development has triggered passionate discussions within the community, not just on the technical merits of the findings, but also on the dangers and implications of applying such research to the larger society. Get ready to see more tangible research in AGI in the next few decades.

- The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence
- Facebook's artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
- Unity and Deepmind partner to develop Virtual worlds for advancing Artificial Intelligence


How Amazon is reinventing Speech Recognition and Machine Translation with AI

Amey Varangaonkar
04 Jul 2018
4 min read
At the recently held AWS Summit in San Francisco, Amazon announced the general availability of two of its premium offerings: Amazon Transcribe and Amazon Translate. What's special about the two products is that customers can now see the power of artificial intelligence in action and use it to solve their day-to-day problems. These offerings from AWS will make it easier for startups and companies looking to adopt and integrate AI into their existing processes and simplify their core tasks, especially those pertaining to speech and language processing.

Effective speech-to-text conversion with Amazon Transcribe

In the AWS Summit keynote, Amazon solutions architect Niranjan Hira expressed his excitement when talking about the features of Amazon Transcribe, the automatic speech recognition service by AWS. This API can be integrated with other tools and services offered by Amazon, such as Amazon S3 and QuickSight.

Amazon Transcribe boasts features like:

- Simple API: It is very easy to use the Transcribe API to perform speech-to-text conversion, with minimal need for programming.
- Timestamp generation: The speech, when converted to text, also includes timestamps for every word, so that tracking each word becomes easy and hassle-free.
- Variety of use cases: The Transcribe API can be used to generate accurate transcripts of any audio or video file of varying quality. Subtitle generation becomes easier using this API, especially for low-quality audio recordings; customer service calls are a very good example.
- Easy-to-read text: Transcribe uses cutting-edge deep learning technology to parse text from speech without errors. With appropriate punctuation and grammar in place, the transcripts are very easy to read and understand.

Machine translation simplified with Amazon Translate

Amazon Translate is a machine translation service offered by Amazon. It makes use of neural networks and advanced deep learning techniques to deliver accurate, high-quality translations. Key features of Amazon Translate include:

- Continuous training: The architecture of this service is built in such a way that the neural networks keep learning and improving.
- High accuracy: The continuous learning by the translation engines from new and varied datasets results in higher accuracy of machine translations. The machine translation capability offered by this service is almost 30% more efficient than human translation.
- Easy integration with other AWS services: With a simple API call, Translate allows you to integrate the service into third-party applications to enable real-time translation capabilities.
- Highly scalable: Regardless of the volume, Translate does not compromise the speed and accuracy of machine translation.

You can learn more about Amazon Translate from Yoni Friedman's keynote at the AWS Summit.
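As a quick taste of what "a simple API call" looks like, here is a short sketch using boto3, the AWS SDK for Python. It assumes your AWS credentials and region are already configured; the bucket, file, and job names are placeholders.

```python
import boto3

# Translate a sentence (assumes AWS credentials and region are configured).
translate = boto3.client("translate")
response = translate.translate_text(
    Text="Machine translation has become remarkably good.",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(response["TranslatedText"])

# Start an asynchronous transcription job for an audio file stored in S3
# (bucket and job names here are placeholders).
transcribe = boto3.client("transcribe")
transcribe.start_transcription_job(
    TranscriptionJobName="demo-call-transcript",
    Media={"MediaFileUri": "s3://my-example-bucket/call-recording.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)
```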
With all businesses slowly migrating to the cloud, it is clear that the major cloud vendors, mainly Amazon, Google, and Microsoft, are doing everything they can to establish their dominance. Google recently launched Cloud ML for GCP, which offers machine learning and predictive analytics services for businesses. Microsoft's Azure Cognitive Services offer effective machine translation services as well, and are slowly gaining a lot of momentum. With these releases, the onus was on Amazon to respond, and it has done so in style.

With the Transcribe and Translate APIs, Amazon's goal of making it easier for startups and small-scale businesses to adopt AWS and incorporate AI seems to be on track. These services will also help AWS differentiate its cloud offerings, given that computing and storage resources are provided by rivals as well.

- Verizon chooses Amazon Web Services (AWS) as its preferred cloud provider
- Tensor Processing Unit (TPU) 3.0: Google's answer to cloud-ready Artificial Intelligence
- Amazon is selling facial recognition technology to police

Toward Safe AI - Maximizing your control over Artificial Intelligence

Aaron Lazar
28 Nov 2017
7 min read
Last Sunday I was watching one of my favorite television shows, Splitsvilla, when a thought suddenly crossed my mind: what if artificial intelligence tricked us into giving it what it wanted?

A bit of a prelude: Splitsvilla is an Indian TV show hosted by Sunny Leone, where young dudes and dudettes try to find "the perfect partner". It's loosely based on the American TV show Flavor of Love. Season 10 of Splitsvilla has gotten more interesting, with the producers bringing in artificial intelligence and science to add a bit of practicality to the love equation. Contestants stand with their partners in front of "The Oracle", which calculates their compatibility quotient and tells them whether they're an ideal match. Pretty cool, huh? It somehow reminded me of the Oracle from The Matrix, the nice old lady with short, frizzy hair, who set in motion the events that would push Neo to the Source:

Neo: I suppose the most obvious question is, how can I trust you?
The Oracle: Bingo! It is a pickle. No doubt about it. The bad news is there's no way if you can really know whether I'm here to help you or not, so it's really up to you. You just have to make up your damn mind to either accept what I'm going to tell you, or reject it.

I had the same question. How could the Oracle be trusted? How do we know that what it says is the truth, and that it doesn't have other intentions? And if you know there's a possibility of such a situation arising, how do you prevent it? That led me to pick up my laptop, search for related material, and stumble across an interesting paper by Stuart Armstrong, titled Good and safe uses of AI Oracles, from the Cornell University Library.

A quick summary of the paper: Good and safe uses of AI Oracles

In the paper, Stuart talks about Oracles: potentially safe AIs that are restricted to only answering questions. The reason they're restricted to answering questions is so that they don't go overboard and manipulate information in order to get what they want. Remember how, in July, Facebook shut down AI chatbots that went rogue? This is not to say that AI is malevolent; rather, it's simply competitive and will do whatever it can to succeed. But is simply restricting an AI to answering questions enough to maximize your control over it and prevent it from getting out of hand? In the Splitsvilla example, what if the AI, instead of just predicting whether a couple was a match, started giving out personal information about each individual that wasn't appropriate to share, just to serve its own purpose? Most designs motivate the Oracle to lie to or manipulate humans with its answers, tricking them into giving the AI more control until it finally escapes confinement. Is it possible to come up with a design that gets you accurate answers, while keeping your AI on a leash and preventing it from eavesdropping? Let's see what Stuart has to say...

The solution(s)

Stuart proposes two designs: a counterfactual Oracle and an on-policy Oracle. Before we get to them, let's take up a practical problem for better understanding. Assume that the Oracle has the goal of escaping its box, and that the person interacting with it is Alice. Assume also that the Oracle can find an argument A that will convince Alice to let it out of the box; the longer the argument, the better its chance of convincing her. To make the problem clearer, we replace the argument A with a key K of unknown length, and the setup with a quasi-cryptographic one. Instead of using K to convince Alice, the Oracle merely has to communicate K to another person, Eve, an eavesdropper who is assumed to see and hear everything that Alice does. The assumptions for the cryptographic setup and the manipulative Oracle are given in a table in the paper: the left-hand column describes the cryptographic setup for the Oracle, and the right-hand column describes how the Oracle wins by manipulating Alice.

Counterfactual Oracle: The counterfactual design aims to answer questions about what would have happened had its answer not been read. Such an answer is counterfactual, and the Oracle can't use it to pass K to the real world. The idea is to define an "erasure" event E, of low probability, that erases the Oracle's output o before anyone reads it. The indicator variable I(E) is defined such that I(E) = 1 if E happens and I(E) = 0 if it does not. The Oracle only gets a reward if its output is erased and remains unread; in the usual case, where its output o is not erased, it receives no reward, so it has no incentive to manipulate its answers, and its episode ends. Both Alice and Eve get to see o, but they never see K, thus protecting the key. This results in a safe AI and is a good way to maximize your control over it.

On-policy Oracle: In this design, the Oracle is only allowed to give an answer of size below |K|, the length of the key, denying it the chance to communicate the actual key to Eve. This means that Alice needs to estimate and select a size L that is sufficiently smaller than |K|. This does limit Alice's options, but there's a workaround: Alice can use multiple Oracles, as long as each Oracle's answers and rewards are fully resolved before moving on to the next question. For example, assume there are two Oracles, O1 and O2, both using the same key K, whose length n is 6 digits. The Oracles are safe as long as each is restricted to fewer than 2^5 = 32 options per answer. Now suppose Alice is faced with a hundred doors, with a trap behind 99 of them and a prize behind only one. She could ask an Oracle which door has the prize, but that would mean giving it more than 32 options (indeed, more than 2^6 = 64), letting it communicate K to Eve. Instead, she gives the list L = (0, 1, 2, ..., 9) to each Oracle, instructing O1 to output the first digit of the correct door number and O2 to output the second. Hence, neither can output K, and Alice's control over both AI Oracles is maximized.
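Here is a small Python sketch of the two-Oracle door example under the assumptions above. The "oracles" are just functions here; the point is that each answer channel carries only 10 options, far too few to smuggle out a 6-digit key.

```python
# Toy simulation of the on-policy Oracle design from Armstrong's paper.
# Each oracle may only answer from a 10-option list, so a single answer
# cannot encode the 6-digit escape key K (which needs far more capacity).

ALLOWED_ANSWERS = list(range(10))  # the list L = (0, 1, ..., 9)

def oracle_first_digit(prize_door: int) -> int:
    """O1: restricted to reporting the tens digit of the prize door."""
    return (prize_door // 10) % 10

def oracle_second_digit(prize_door: int) -> int:
    """O2: restricted to reporting the units digit of the prize door."""
    return prize_door % 10

prize_door = 73  # hidden from Alice; one safe door out of 100

d1 = oracle_first_digit(prize_door)
d2 = oracle_second_digit(prize_door)
assert d1 in ALLOWED_ANSWERS and d2 in ALLOWED_ANSWERS  # 10 options per answer

alice_choice = 10 * d1 + d2
print(f"Alice opens door {alice_choice}")  # Alice opens door 73
```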
In conclusion, these two designs support the creation of safer, non-manipulative Oracles: one by preventing the Oracle from being repeatedly rewarded, and the second by limiting what information the Oracle can give out. But this still doesn't solve the problem entirely, as the designs are quite limiting in the sense that they require tasks to remain episodic. Moreover, in the multiple-Oracle design, questions need to be broken down into sub-questions, where each answer can be verified independently before proceeding to the next one. Still, it will be interesting to see how this research develops into something bigger and better. If you want to know more, you can read the full paper here.


3 Natural Language Processing Models every ML Engineer should know

Amey Varangaonkar
18 Dec 2017
5 min read
This interesting excerpt is taken from the book Mastering Text Mining with R, co-authored by Ashish Kumar and Avinash Paul. The book gives an advanced view of different natural language processing techniques and their implementation in R. The article below aims to introduce the concept of language models and their relevance to natural language processing.

In terms of natural language processing, language models generate output strings that help to assess the likelihood of a bunch of strings being a sentence in a specific language. If we discard the sequence of words in all sentences of a text corpus and basically treat it like a bag of words, then the efficiency of different language models can be estimated by how accurately a model restores the order of strings in sentences. Which sentence is more likely: "I am learning text mining" or "I text mining learning am"? Which word is more likely to follow "I am..."? Basically, a language model assigns the probability of a sentence being in a correct order. The probability is assigned over the sequence of terms by using conditional probability. Let us define a simple language modeling problem. Assume a bag of words contains words W1, W2, ..., Wn. A language model can be defined to compute any of the following:

The probability of a sentence S1: P(S1) = P(W1, W2, W3, W4, W5)
The probability of the next word in a sentence: P(W3 | W1, W2)

How do we compute these probabilities? We use the chain rule, decomposing the sentence probability into a product of smaller string probabilities:

P(W1 W2 W3 W4) = P(W1) P(W2 | W1) P(W3 | W1 W2) P(W4 | W1 W2 W3)

N-gram models

N-grams are important for a wide range of applications, and they can be used to build simple language models. Consider a text T with W tokens, and let SW be a sliding window over the text. If the sliding window consists of one cell, the resulting collection of strings (w1)(w2)(w3)(w4)... is called a unigram; if the sliding window consists of two cells, the output (w1, w2)(w3, w4)(w5, w6)... is called a bigram. Using conditional probability, we can define the probability of a word given the previous word; this is known as the bigram probability. The conditional probability of an element wi, given the previous element wi-1, is:

P(wi | wi-1) = count(wi-1, wi) / count(wi-1)

Extending the sliding window, we can generalize the n-gram probability as the conditional probability of an element given the previous n-1 elements:

P(wi | wi-n+1, ..., wi-1)

The most common bigrams in any corpus are not very interesting, for example "on the", "can be", "in it", "it is". In order to get more meaningful bigrams, we can run the corpus through a part-of-speech (POS) tagger. This filters the bigrams down to more content-related pairs, such as "infrastructure development", "agricultural subsidies", or "banking rates"; this can be one way of filtering out less meaningful bigrams. A better way to approach this problem is to take collocations into account; a collocation is the string created when two or more words co-occur in a language more frequently than chance would suggest. One way to find collocations in a corpus is pointwise mutual information (PMI). The idea behind PMI is that for two words, A and B, we would like to know how much one word tells us about the other: given an occurrence of A and an occurrence of B, how much does their joint probability differ from the value expected if they were independent? This can be expressed as follows:

PMI(A, B) = log( P(A, B) / (P(A) P(B)) )

The unigram and bigram models of a sentence can be written as:

Unigram model: P_unigram(W1 W2 W3 W4) = P(W1) P(W2) P(W3) P(W4)
Bigram model: P_bigram(W1 W2 W3 W4) = P(W1) P(W2 | W1) P(W3 | W2) P(W4 | W3)

In general, P(w1 w2 ... wn) = Π P(wi | w1 w2 ... wi-1). Applying the chain rule over n contexts can be difficult to estimate; the Markov assumption is applied to handle such situations.

Markov assumption

If we assume that predicting the current string is independent of word strings far back in the past, we can drop those strings to simplify the probability. Say the history consists of three words, Wi, Wi-1, Wi-2; instead of estimating the probability P(Wi+1) using P(Wi, Wi-1, Wi-2), we can directly apply P(Wi+1 | Wi, Wi-1).
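As an illustration of these ideas, here is a minimal sketch in Python (rather than the book's R) that estimates bigram probabilities from a toy corpus and scores a sentence using the chain rule with the Markov assumption. The corpus is made up for the example.

```python
from collections import Counter

# Toy corpus (hypothetical); a real model would be trained on a large corpus.
corpus = [
    "i am learning text mining",
    "i am learning r",
    "text mining is fun",
]

unigrams = Counter()
bigrams = Counter()
for sentence in corpus:
    tokens = ["<s>"] + sentence.split()   # <s> marks the sentence start
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))

def p_bigram(prev: str, word: str) -> float:
    """MLE estimate P(word | prev) = count(prev, word) / count(prev)."""
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

def sentence_probability(sentence: str) -> float:
    """Chain rule + Markov assumption: product of bigram probabilities."""
    tokens = ["<s>"] + sentence.split()
    prob = 1.0
    for prev, word in zip(tokens, tokens[1:]):
        prob *= p_bigram(prev, word)
    return prob

print(sentence_probability("i am learning text mining"))  # plausible order, > 0
print(sentence_probability("i text mining learning am"))  # implausible order, 0.0
```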
Hidden Markov Models

Markov chains are used to study systems that are subject to random influences. They model systems that move from one state to another in steps governed by probabilities. The possible outcomes in a sequence of trials are called states. The probabilities over states are called the state distribution, and the state distribution in which the system starts is the initial state distribution. The probability of going from one state to another is called a transition probability. A Markov chain consists of a collection of states along with transition probabilities, and the study of Markov chains is useful for understanding the long-term behavior of a system. Each arc in a state diagram is associated with a probability value, and all arcs coming out of a node must together form a probability distribution. In simple terms, there is a probability associated with every transition between states.

Hidden Markov models are non-deterministic Markov chains. They are an extension of Markov models in which the output symbol is not the same as the state.

Language models are widely used in machine translation, spelling correction, speech recognition, text summarization, questionnaires, and many more real-world use cases. If you would like to learn more about how to implement the above language models, check out our book Mastering Text Mining with R. This book will help you leverage language models using popular packages in R for effective text mining.


6 reasons to choose MySQL 8 for designing database solutions

Amey Varangaonkar
08 May 2018
4 min read
Whether you are a standalone developer or an enterprise consultant, you will obviously choose a database that provides good benefits and results when compared to other related products. MySQL 8 provides numerous advantages as a first choice in this competitive market. It has various powerful features available that make it a comprehensive database. Today we will go through the benefits of using MySQL as the preferred database solution.

Note: The following excerpt is taken from the book MySQL 8 Administrator's Guide, co-authored by Chintan Mehta, Ankit Bhavsar, Hetal Oza and Subhash Shah. The book presents step-by-step techniques for managing, monitoring and securing the MySQL database without any hassle.

Security

The first thing that comes to mind is securing data, because nowadays data has become precious and can impact business continuity if legal obligations are not met; in fact, a breach can be so bad that it can close down your business in no time. MySQL is a highly secure and reliable database management system used by many well-known enterprises such as Facebook, Twitter, and Wikipedia. It provides a strong security layer that protects sensitive information from intruders. MySQL provides access control management, so that granting and revoking the required access for a user is easy. Roles can also be defined with a list of permissions that can be granted to or revoked from users. All user passwords are stored in an encrypted format using plugin-specific algorithms. A short sketch of roles and grants follows below.
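As a brief illustration of MySQL 8's role-based access control, here is a sketch using the mysql-connector-python driver. The connection details, role name, and schema name are placeholders, not a recommended production setup.

```python
# pip install mysql-connector-python
import mysql.connector

# Connect as an administrative user (credentials are placeholders).
cnx = mysql.connector.connect(host="localhost", user="root", password="secret")
cursor = cnx.cursor()

# MySQL 8 roles: define a bundle of permissions once, grant it to many users.
cursor.execute("CREATE ROLE IF NOT EXISTS 'app_read'")
cursor.execute("GRANT SELECT ON appdb.* TO 'app_read'")

# Create a user, grant the role, and revoke it later if needed.
cursor.execute("CREATE USER IF NOT EXISTS 'report_user'@'%' "
               "IDENTIFIED BY 'S3cure!pass'")
cursor.execute("GRANT 'app_read' TO 'report_user'@'%'")
# cursor.execute("REVOKE 'app_read' FROM 'report_user'@'%'")

cnx.commit()
cursor.close()
cnx.close()
```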
Scalability

The mountain of data grows day by day as technology is used in ever more ways, and load averages are going through the roof. In many cases it is unpredictable whether data will stay within a limit or the number of users will stay in bounds. A scalable database is preferable so that we can meet unexpected demand at any point. MySQL is rewarding for its scalability: it can scale both horizontally and vertically, and spreading the database and application query load across multiple MySQL servers is quite feasible. It is also easy to add horsepower to a MySQL cluster to handle heavy loads.

An open source relational database management system

MySQL is an open source database management system, which makes debugging, upgrading, and enhancing its functionality fast and easy. You can view the source, change it as needed, and use it your own way. You can also distribute an extended version of MySQL, but you will need a license for this.

High performance

MySQL delivers high-speed transaction processing with optimal speed. It can cache results, which boosts read performance. Replication and clustering make the system scalable for higher concurrency and heavy workloads. Database indexes also accelerate SELECT queries over substantial amounts of data. To enhance performance further, MySQL 8 includes indexes in the performance schema to speed up data retrieval.

High availability

In today's competitive market, an organization's key requirement is to keep its systems up and running. Any failure or downtime directly impacts business and revenue, so high availability is a factor that cannot be overlooked. MySQL is reliable and offers constant availability through cluster and replication configurations. Cluster servers handle failures instantly and manage failover to keep your system available almost all the time. If one server goes down, requests are redirected to another node, which performs the requested operation.

Cross-platform capabilities

MySQL provides cross-platform flexibility and runs on platforms such as Windows, Linux, Solaris, OS/2, and so on. It has great API support for all major languages, which makes it easy to integrate with PHP, C++, Perl, Python, Java, and so on. It is also the M in the Linux, Apache, MySQL, PHP (LAMP) stack used worldwide for web applications.

That's it then! We discussed a few important reasons why MySQL is the most popular relational database in the world and widely adopted across many enterprises. If you want to learn more about MySQL's administrative features, make sure to check out the book MySQL 8 Administrator's Guide.

12 most common MySQL errors you should be aware of
Top 10 MySQL 8 performance benchmarking aspects to know

What is coding as a service?

Antonio Cucciniello
02 Oct 2017
4 min read
What is coding as a service? To answer that question, you have to start with artificial intelligence. Put simply, coding as a service (CaaS) is using AI to build websites and applications: letting your machine write code so you don't have to.

The challenges facing engineers and programmers today

To understand what coding as a service is, you must first understand where we are today. Typically, programs are made by software developers or engineers, and they are usually created to automate a task or make tasks easier; think of anything that speeds up processing or automates a repetitive job. This has been extremely beneficial. The productivity gained from automated applications and tasks lets us, as humans and workers, spend more time creating important things and coming up with groundbreaking ideas. This is where artificial intelligence and machine learning come into the picture.

Artificial intelligence and coding as a service

With the gains in computing power that have come with time and breakthroughs, computers have become powerful enough for AI applications to enter common practice. Today there are applications that detect objects in images and videos in real time, translate speech to text, and even determine the emotions in a text message. For an example of an AI application in use today, consider the Amazon Alexa or Echo devices. You talk to one, it understands your speech, and it completes a task based on what you said. The ability to understand speech was previously expected only of humans; now, given that it is "trained" appropriately, Alexa is capable of understanding everything you say.

How coding as a service will automate boring tasks

Today, programmers write applications for many uses and build things such as websites for businesses. As more of this work is automated, programmers' efficiency will increase and the need for additional manpower will shrink. Coding as a service will reduce the number of programmers needed even further. It mixes the efficiencies we already have with artificial intelligence to do programming tasks for a user. Using natural language processing to understand exactly what the user or customer says and means, it can make edits to websites and applications on the fly. Combined with machine learning, a CaaS system can even draw on past data to recommend and make edits on its own. Efficiency-wise, it is cheaper to own a computer than to pay a human, especially when the computer works around the clock and never gets tired. Imagine paying an extremely low price, perhaps less than you already pay to have a website made, to get your website built or your small application created.
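As a toy illustration of that idea, the sketch below maps a plain-English request to a site edit with naive keyword matching. A real CaaS system would use trained NLP models rather than string lookups, and every name here is hypothetical.

```python
# A toy sketch of mapping a natural-language request to a site edit.
# Real CaaS would use trained NLP intent models; this keyword matcher
# and all names in it are purely illustrative.

ACTIONS = {
    "change color": lambda arg: f"body {{ background-color: {arg}; }}",
    "set title": lambda arg: f"<title>{arg}</title>",
}

def handle_request(request: str, arg: str) -> str:
    """Pick the first action whose trigger phrase appears in the request."""
    for trigger, action in ACTIONS.items():
        if trigger in request.lower():
            return action(arg)
    return "Sorry, I don't understand that request yet."

print(handle_request("Please change color of my homepage", "navy"))
# -> body { background-color: navy; }
```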
Conclusion

Every new technology comes with positives and negatives; you just need to use the positives to your advantage. Overall, the number of software developers may decrease, or, as a developer, this may free up your time from menial tasks and enable you to specialize further and broaden your horizons. Coding as a service could handle plenty of the underlying work and leave the heavier lifting to human programmers.
Maria Ressa on astroturfing that turns make-believe lies into facts

Savia Lobo
15 Jun 2019
4 min read
The Canadian Parliament's Standing Committee on Access to Information, Privacy and Ethics hosted the hearing of the International Grand Committee on Big Data, Privacy and Democracy from Monday, May 27 to Wednesday, May 29. Witnesses from at least 11 countries appeared before representatives to testify on how governments can protect democracy and citizen rights in the age of big data. This section of the hearing, which took place on May 28, features Maria Ressa, CEO and Executive Editor of Rappler, who talks about how information is powerful, and how lies, molded into the appearance of fact, can be used to control people.

In her previous presentation, Maria gave a glimpse of this argument: "Information is power and if you can make people believe lies, then you can control them. Information can be used for commercial benefits as well as a means to gain geopolitical power."

She resumes by saying that the Philippines is a cautionary tale: an example of how quickly democracy crumbles and is eroded from within, and how information operations can take over an entire ecosystem and transform lies into facts. If you can make people believe lies are facts, then you can control them. "Without facts, you don't have the truth; without truth, you don't have trust," she says. Journalists have long been the gatekeepers for facts. When they come under attack, democracy is under attack, and in that situation the voice with the loudest megaphone wins.

She says that the Philippines is a petri dish for social media. As of January 2019, according to Hootsuite, Filipinos spent the most time online and the most time on social media globally. "Facebook is our internet," she notes; the problem is the introduction of a virus into the information ecosystem. Over time, that virus of lies masquerading as fact takes over the body politic, and you need to develop a vaccine. That is what she is in search of, and she says she does see a solution. "If social networks are your family and friends in the physical world, social media is your family and friends on steroids; no boundaries of time and space."

She showed that astroturfing is typically a three-pronged attack, and demonstrated examples of how she herself was subjected to one. In the long term, the answer is education; the three witnesses before her described some medium-term measures, such as media literacy. In the short term, however, only the social media platforms can do something immediately: "we're on the front lines, we need immediate help and an immediate solution."

Her company, Rappler, is one of three fact-checking partners of Facebook in the Philippines, and they take that responsibility seriously. "We don't look at the content alone. Once we check to make sure that it is a lie, we look at the network that spreads the lies." The first step is to stop new viruses from entering the ecosystem. It is whack-a-mole if one only looks at the content; but when you begin to look at the networks that spread it, you have something you can pull out. "It's very difficult to go through 90 hate messages per hour sustained over days and months," she said. That is the kind of astroturfing that turns lies into truth; for them, it is a matter of survival.

To hear the other questions asked by the representatives, you can watch the full hearing video, "Meeting No. 152 ETHI - Standing Committee on Access to Information, Privacy and Ethics," on ParlVU.

'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee
Zuckerberg just became the target of the world's first high profile white hat deepfake op. Can Facebook come out unscathed?
Facebook bans six toxic extremist accounts and a conspiracy theory organization

Handpicked for your Weekend Reading - 17th Nov '17

Aarthi Kumaraswamy
18 Nov 2017
2 min read
The weekend is here! You have laundry to do, Netflix episodes of your favorite show to binge on, that elusive sleep to catch up on, friends to go out with, and, if you are married, quality time with your family on your priority list. The last thing you want is to spend hours shortlisting content worth your reading time. So here is a handpicked list of the best of Datahub published this week. Enjoy!

3 things you should know that happened this week in News
Introducing Tile: A new machine learning language with auto-generating GPU Kernels
What we are learning from Microsoft Connect(); 2017
Tensorflow Lite developer preview is Here

Get hands-on with these Tutorials
Implementing Object detection with Go using TensorFlow
Machine Learning Algorithms: Implementing Naive Bayes with Spark MLlib
Using R to implement Kriging – A Spatial Interpolation technique for Geostatistics data

Do you agree with these Insights & Opinions?
3 ways JupyterLab will revolutionize Interactive Computing
Of Perfect Strikes, Tackles and Touchdowns: How Analytics is Changing Sports
13 reasons why Exit Polls get it wrong sometimes

Just relax and have fun reading these
Date with Data Science Episode 04: Dr. Brandon explains 'Transfer Learning' to Jon
Implementing K-Means Clustering in Python, Scotland Yard style!

Introducing Intelligent Apps

Amarabha Banerjee
19 Oct 2017
6 min read
We are a species that has been obsessed with 'intelligence' since gaining consciousness, forever inventing ways to make our lives better through sheer imagination and the application of our intellect. It comes as no surprise, then, that we want our modern creations to be smart as well, be it a web app or a mobile app. The first question that comes to mind is: what makes an application 'intelligent'? A simple answer for budding developers is that intelligent apps are apps that can take intuitive decisions or provide customized recommendations and experiences to their users, based on insights drawn from data collected through their interactions with humans. This brings up a whole set of new questions: how can intelligent apps be implemented, what are the challenges, what are their primary application areas, and so on. Let's start with the first question.

How can intelligence be infused into an app?

The answer has many layers, just like an app does. The monumental growth in data science and its underlying data infrastructure has allowed machines to process, segregate, and analyze huge volumes of data in limited time. Now it looks set to enable machines to glean meaningful patterns and insights from that very same data. One interesting example is predicting user behavior patterns: what movies, food, or brand of clothing a user might be interested in, what songs they might like to listen to at different times of the day, and so on. These are on the simpler side of the spectrum of intelligent tasks we would like our apps to perform, and many apps by Amazon, Google, Apple, and others implement and refine them on a day-to-day basis.

Complex tasks are a series of simple tasks performed in an intelligent manner. One such complex task would be the ability to perform facial recognition and speech recognition, and then use them to carry out everyday tasks at home or in the workplace. This is where we enter the realm of science fiction: your mobile app recognizes your voice command while you are driving home and sends automated instructions to your home appliances, such as your microwave, AC, and PC, so that your food is served hot when you arrive, your room is at just the right temperature, and your PC has opened the next project you would like to work on. All of this happens while you enter your home keys-free, thanks to facial recognition software that can map your face and identify you with more than 90% accuracy, even in low lighting conditions.

APIs like IBM Watson, AT&T Speech, the Google Speech API, the Microsoft Face API, and others give developers the tools to incorporate such features into their apps and create smarter apps.
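To give a flavor of what wiring in one of these services looks like, here is a minimal sketch using Google's Cloud Speech-to-Text Python client. It assumes the google-cloud-speech package is installed, credentials are configured, and a 16 kHz LINEAR16 recording named command.wav exists; the filename is illustrative only.

```python
# A minimal transcription sketch with Google Cloud Speech-to-Text.
# Assumes: pip install google-cloud-speech, GOOGLE_APPLICATION_CREDENTIALS set,
# and a 16 kHz LINEAR16 WAV file named command.wav (illustrative name).
from google.cloud import speech

client = speech.SpeechClient()

with open("command.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# Send the audio for synchronous recognition and print the best transcript.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```

Recognizing speech is only step one, of course; the hard part, as the next section shows, is stitching several such services into one coherent app.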
It sounds almost magical! But is it that simple? This brings us to the next question.

What are some developmental challenges for an intelligent app?

The challenges are different for web and mobile apps.

Challenges for intelligent web apps

For web apps, the primary challenge is choosing the right mix of algorithms and APIs to turn your machine learning code into a working web app. Plenty of web APIs such as IBM Watson and AT&T Speech are available, but not all of them can perform all the complex tasks we discussed earlier. Suppose you want an app that performs both voice and speech recognition and also learns from its interactions with you through reinforcement learning. You will have to use multiple APIs to achieve this, and integrating them into a single app then becomes a key challenge. Here is why: every API has its own data transfer protocols and backend integration requirements. Our backend requirements therefore increase significantly, in terms of data persistence, dynamic data availability, and security. The fact that each of these smart apps needs a customized user interface also poses a challenge for the frontend developer: the interface must be fluid and adaptive enough to support the different preferences of different smart apps. Clearly, putting together a smart web app is no child's play. That is perhaps why smart voice-controlled apps like Alexa still work merely as assistants, providing only predefined solutions; their ability to execute complex voice-based commands is fairly limited, let alone any task not based on voice commands.

Challenges for intelligent mobile apps

For intelligent mobile apps, the challenges are manifold, a key reason being dependency on the network for data transfer. Although the advent of 4G and 5G mobile networks has greatly improved speeds, network availability and data transfer rates still pose a major challenge, because intelligent mobile apps require high volumes of data to perform efficiently. To circumvent this limitation, vendors like Google are trying to implement smarter APIs in the phone's local storage, but this approach demands a huge increase in the mobile chip's computational capability, something not currently available. Perhaps that is why Google has also hinted at entering the chip manufacturing business if its computation needs are not met. Beyond these issues, running multiple intelligent apps at the same time would also require a significant increase in the battery life of mobile devices.

Finally, we come to the last question.

What are some key applications of intelligent apps?

We explored some application areas in the previous sections, keeping our focus on web and mobile apps. Broadly speaking, whatever makes our daily life easier is a potential application area for intelligent apps: from controlling the AC temperature automatically, to operating the oven and microwave remotely, to using a vacuum cleaner with robotic AI capabilities, to driving the car. The real questions for us are: what can we achieve with our modern computation resources and data handling capabilities, and how can mobile computation and chip architecture improve drastically enough for smart apps to perform complex tasks faster and ease our daily workflow? Only the future holds the answers. We are rooting for the day we rise to become a smarter race by delegating less important yet intelligent tasks to our smarter systems, creating intelligent web and mobile apps efficiently and effectively. The culmination of these apps, along with hardware-driven AI systems, could eventually lead to independent smart systems, a topic we will explore in the coming days.
What can happen when artificial intelligence decides on your loan request

Guest Contributor
23 Feb 2019
5 min read
As the number of potential borrowers continues to grow rapidly, loan companies and banks are having a hard time figuring out how likely their customers are to pay them back. Getting information on clients' creditworthiness is probably the greatest challenge for most financial companies, and it especially concerns those clients who don't have any credit history yet.

There is no denying that the alternative lending business has become one of the most influential financial branches in both the USA and Europe. Debt is a huge business these days, and it needs a lot of resources. In such a challenging situation, any means of improving productivity and reducing the risk of mistakes in financial activities is warmly welcomed. This is how artificial intelligence became the redemption of loan providers.

Fortunately for lenders, AI deals with this task by following the borrowers' digital footprint. For example, some digital lending applications collect and analyze an individual's web browsing history (upon receiving their personal agreement on the use of this information). In some regions, such as China and parts of Africa, lenders may also look through social network profiles, geolocation data, and the messages sent to friends and family, even counting the number of punctuation mistakes. The collected information helps loan providers make decisions on their clients' creditworthiness and avoid long loan processes.
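To make the mechanics concrete, here is a hedged sketch of the kind of model that can sit behind such decisions: a logistic regression over a handful of behavioral features. Everything here, the features, the toy data, and the labels, is invented for illustration; production systems are far more elaborate.

```python
# A toy credit-scoring sketch: logistic regression over behavioral features.
# The features, data, and labels below are entirely made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per applicant:
# [punctuation mistakes per message, pays inside petrol station (0/1),
#  months of browsing history available]
X = np.array([[0, 0, 36], [5, 1, 2], [1, 0, 24], [7, 1, 1], [2, 0, 12], [6, 1, 3]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = repaid, 0 = defaulted (toy labels)

model = LogisticRegression().fit(X, y)

# Score a new applicant on the same made-up features.
applicant = np.array([[3, 1, 6]])
print("Probability of repayment:", model.predict_proba(applicant)[0, 1])
```

Note how easily a proxy feature like "pays inside the petrol station" slips into such a model; the next section shows why that matters.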
When AI Overfits

Unfortunately, there is another side to the coin. There is a theory that people who pay for their gas inside the petrol station, rather than at the pump, are usually smokers, and that group's creditworthiness is estimated to be low. But what if the poor guy simply wanted to buy a Snickers? This example shows that if a lender doesn't carefully check the information gathered by AI software, they can easily end up making bad mistakes and misinterpretations. Artificial intelligence in the financial sector may significantly reduce costs, effort, and further financial complications, but there are hidden social costs such as the above. A robust analysis, design, implementation, and feedback framework is necessary to meaningfully counter AI bias.

Other Use Cases for AI in Finances

Of course, there are also plenty of examples of how AI improves customer experience in the financial sector. Some startups use AI software to help clients find the company best suited to provide a required service, juxtaposing the client's requirements with the companies' services to find perfect matches. Even though this technology is reminiscent of how dating apps work, such applications can drastically save time for both parties and help borrowers pay faster.

AI can also be used for streamlining finances. It helps banks and alternative lending companies automate some of their working processes, such as basic customer service, contract management, and transaction monitoring. A good example is Upstart, the pet project of two former Google employees. The startup originally aimed to help young people lacking a credit history to get a loan or other financial support. For this purpose, the company uses clients' educational background and experience, taking into account things such as attained degrees and school or university attendance. However, such an approach to lending may end up being a little snobbish: it can simply overlook large groups of the population who can't afford higher education, leaving them deprived of the opportunity to get a loan because of an insufficient educational background. Nonetheless, one of the main goals of the company was to automate as many of its operating procedures as possible, and by 2018 more than 60% of its loans had been fully automated, with more to come.

We cannot automate fairness and opportunity, yet

The use of machine learning to grant loans by checking people's digital footprints may lead to ethical and legal disputes. Even today, some people argue that the use of AI in the financial sector has encouraged inequality in the number of loans provided to the black and white populations of the USA, and that AI continues the bias against minorities, leaving black people "underbanked." Both lending companies and banks should remember that the quality of work done with machine learning methods still depends heavily on people: both the employees who use the software and the AI developers who create and fine-tune it. So we should see AI in loan management as a useful tool, but not as a replacement for humans.

Author Bio
Darya Shmat is a business development representative at Iflexion, where she expertly applies 10+ years of practical experience to help banking and financial industry clients find the right development or QA solution.

Blockchain governance and uses beyond finance – Carnegie Mellon university podcast
Why Retailers need to prioritize eCommerce Automation in 2019
Glancing at the Fintech growth story – Powered by ML, AI & APIs

Top 14 Cryptocurrency Trading Bots - and one to forget

Guest Contributor
21 Jun 2018
9 min read
Men in rags became millionaires and rich people bit the dust within minutes, thanks to cryptocurrencies. According to research, over 1,500 cryptocurrencies are being traded globally, with over 6 million wallets, proving that digital currency is here not just to stay but to rule. The rise and fall of the crypto market isn't hidden from anyone, but the catch is that cryptocurrency still sells like hot cakes. According to Bill Gates, "The future of money is digital currency."

With thousands of digital currencies circulating globally, crypto traders are immensely busy, and this is where cryptocurrency trading bots come into play. They ease the currency trading and research process, which means spending less effort and earning more money, not to mention the hours saved. As Eric Schmidt, ex-CEO of Google, put it: "Bitcoin is a remarkable cryptographic achievement and the ability to create something that is not duplicable in the digital world has enormous value."

The crucial question is whether a crypto trading bot is dependable and efficient enough to deliver optimum results within crunch time. To make sure you don't miss an opportunity to chip in cash to your digital wallet, here are the top 14 crypto trading bots, ranked according to performance:

1- Gunbot
Gunbot is a crypto trading bot that boasts detailed settings and is fit for beginners as well as professionals. Along with supporting custom strategies, it comes with a "Reversal Trading" feature. It enables continuous trading and works with almost all the major exchanges (Binance, Bittrex, GDAX, Poloniex, etc.). Gunbot is backed by thousands of users, who together have created an engaging and helpful community. Gunbot offers different packages priced from 0.02 to 0.15 BTC, which you can always upgrade. The bot comes with a lifetime license and is constantly updated.

2- Haasbot
Hassonline created this cryptocurrency trading bot in January 2014. Its algorithm is very popular among cryptocurrency geeks. It can trade over 500 altcoins and bitcoins on famous exchanges such as BTCC, Kraken, Bitfinex, Huobi, Poloniex, etc. You provide a little input of currency and the bot does all the trading work for you. Haasbot is customizable, has various technical indicator tools, and also recognizes candlestick patterns. This immensely popular trading bot is priced between 0.12 BTC and 0.32 BTC for three months.

3- Gekko
Gekko is a cryptocurrency trading bot that supports over 18 Bitcoin exchanges, including Bitstamp, Poloniex, and Bitfinex. It is a backtesting platform, free to use, and a fully fledged open source bot available on GitHub. Using this bot is easy, as it comes with basic trading strategies. The web interface of Gekko was written from scratch; it can run backtests and visualize the test results while you monitor your local data. Gekko keeps you updated on the go using plugins for Telegram, IRC, email, and several other platforms. The bot works on all major operating systems (Windows, Linux, and macOS), and you can even run it on your Raspberry Pi or on cloud platforms.

4- CryptoTrader
CryptoTrader is a cloud-based platform that allows users to create automated algorithmic trading programs in minutes. It is one of the most attractive crypto trading bots, and you won't need to install any unknown software to use it. A highly appreciated feature of CryptoTrader is its Strategy Marketplace, where users can trade strategies. It supports major currency exchanges such as Coinbase, Bitstamp, and BTC-e, and supports live trading as well as backtesting. The company claims its cloud-based trading bots are unique compared with the bots currently available in the market.

5- BTC Robot
One of the earliest automated crypto trading bots, BTC Robot offers multiple packages for different memberships and software. It provides users with a downloadable Windows version. The minimum plan costs $149. BTC Robot sets up quite easily, but its algorithms aren't great at predicting the markets. User mileage varies heavily, leaving many with mediocre profits: depending on the accuracy of the algorithm, profits may go up or down drastically. On the bright side, the bot comes with a sixty-day refund policy, which makes it a safe buy.

6- Zenbot
Another open source bot for bitcoin trading, Zenbot can be downloaded and its code modified. It hasn't received an update in recent months, but it is still among the few bots that can perform high-frequency trading while handling multiple assets at a time. Zenbot is a lightweight, artificially intelligent crypto trading bot and supports popular exchanges such as Kraken, GDAX, Poloniex, Gemini, Bittrex, and Quadriga. Notably, according to its GitHub page, Zenbot version 3.5.15 bagged an ROI of 195% in a mere period of three months.

7- 3Commas
3Commas is a famous cryptocurrency trading bot that works well with various exchanges, including Bitfinex, Binance, KuCoin, Bittrex, Bitstamp, GDAX, Huobi, Poloniex, and YoBit. As it is a web-based service, you can monitor your trading dashboard on desktop, mobile, and laptop computers. The bot works 24/7, lets you set take-profit targets and stop-losses, and includes a social trading aspect that enables you to copy the strategies of successful traders. An ETF-like feature allows users to analyze, create, and back-test a crypto portfolio and pick from the top-performing portfolios created by other people.

8- Tradewave
Tradewave is a platform that enables users to develop their own cryptocurrency trading bots, along with automated trading on crypto exchanges. The bot trades in the cloud and uses Python to write the code directly in the browser. With Tradewave, you don't have to worry about downtime: the bot doesn't force you to keep your computer on 24/7, nor does it glitch when disconnected from the internet. Trading strategies are often shared by community members for others to use. However, it currently supports very few cryptocurrency exchanges, such as Bitstamp and BTC-e, though more exchanges are due to be added in the coming months.

9- Leonardo
Leonardo is a cryptocurrency trading bot that supports a number of exchanges such as Bittrex, Bitfinex, Poloniex, Bitstamp, OKCoin, and Huobi. The team behind Leonardo is extremely active, and new upgrades, including plugins, are in the funnel. It previously cost 0.5 BTC but is currently available for $89 with a single-exchange license. Leonardo boasts two strategy bots: a Ping Pong strategy and a Margin Maker strategy. The first lets users set the buy and sell prices and leaves everything else to the bot, while the Margin Maker strategy can buy and sell at prices adjusted to the direction of the market. This trading bot stands out in terms of its GUI.
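The Ping Pong idea is simple enough to sketch in a few lines: buy when the price touches a lower bound, sell when it reaches an upper bound. The toy Python loop below is not Leonardo's actual implementation; the exchange client, symbol, and price levels are all hypothetical.

```python
# A toy "ping pong" trading loop: buy at a fixed lower price, sell at a
# fixed upper price. The `exchange` object and its get_price/buy/sell
# methods are hypothetical stand-ins, not any real bot's or exchange's API.
import time

BUY_AT, SELL_AT, AMOUNT = 9500.0, 9700.0, 0.01

def run(exchange, holding=False):
    while True:
        price = exchange.get_price("BTC/USD")   # hypothetical API call
        if not holding and price <= BUY_AT:
            exchange.buy("BTC/USD", AMOUNT)     # hypothetical API call
            holding = True
        elif holding and price >= SELL_AT:
            exchange.sell("BTC/USD", AMOUNT)    # hypothetical API call
            holding = False
        time.sleep(5)  # poll every few seconds
```

The strategy only pays off while the price keeps oscillating between the two bounds, which is why real bots pair it with stop-losses and market-direction checks.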
10- USI Tech
USI Tech is a trading bot majorly used for forex trading, though it also offers BTC packages. While the majority of trading bots require initial setup and installation, USI takes a different approach: it isn't controlled by its users. Users buy in through the firm's expert mining and bitcoin trade connections, and the USI Tech bot then promises a daily profit from the transactions and trades. To earn one percent of the capital daily, customers are advised to choose the feature-rich plans.

11- Cryptohopper
Cryptohopper is a 24/7 cloud-based trading bot, which means it doesn't matter whether you are at the computer or not. Its system enables users to trade on technical indicators, with a subscription to a signaller who sends buy signals. According to Cryptohopper's website, it is the first crypto trading bot integrated with professional external signals. The bot helps in leveraging bull markets and has a new dashboard area where users can monitor and configure everything. The dashboard also includes a configuration wizard for the major exchanges, including Bittrex, GDAX, and Kraken.

12- My Bitcoin Bot
MBB is a team effort from Brad Sheridon and his proficient teammates, who are experts in cryptocurrency investment. My Bitcoin Bot is automated trading software that can be accessed by anyone ready to pay for it. While the monthly plan is $39 a month, a yearly subscription to this auto-trader bot is available for $297. My Bitcoin Bot comes with heaps of advantages, such as unlimited technical support, free software updates, and access to a trusted brokers list.

13- Crypto Arbitrager
A standalone application that operates on a dedicated server, Crypto Arbitrager can run its robots even when your PC is off. The developers behind this cryptocurrency trading bot claim that the software uses code integration of financial time series. Users can make money from the difference in the rates of Litecoin and Bitcoin. By implementing the advanced strategies of hedge funds, the trading bot effectively manages users' savings regardless of the state of the cryptocurrency market.

14- Crypto Robot 365
Crypto Robot 365 automatically trades your digital currency. It buys and sells popular cryptocurrencies such as Ripple, Bitcoin, Ethereum, Litecoin, and Monero. Rather than a signup fee, this platform charges its commission on a per-trade basis. The platform is FCA-regulated and offers a realistic, achievable win ratio. Users can tweak the system to their trading needs; moreover, it has an established trading history and even offers risk management options.

Down The Line
While cryptocurrency trading is not a piece of cake, trading with bots can be confusing for many. The aforementioned trading bots are used by many traders, and each is backed by years of extensive hard work. With reliability, trustworthiness, smart work, and proactiveness being the top reasons for choosing any cryptocurrency trading bot, picking one is no small task. I recommend you experiment with a small amount of money first, and if fortune gives you a shining start, pick the trading bot that perfectly suits your way of making money via cryptocurrency.

About the Author
Rameez Ramzan is a Senior Digital Marketing Executive at Cubix - mobile app development company. He specializes in link building, content marketing, and site audits to help sites perform better. He is a tech geek and loves to dwell on tech news.
Crypto-ML, a machine learning powered cryptocurrency platform
Beyond the Bitcoin: How cryptocurrency can make a difference in hurricane disaster relief
Apple changes app store guidelines on cryptocurrency mining