
Tech Guides

852 Articles

The decentralized web - Trick or Treat?

Bhagyashree R
31 Oct 2018
3 min read
The decentralized web refers to a web that is not dominated by powerful monopolies. It's actually a lot like the web we have now, but with one key difference: its underlying architecture is decentralized, so it becomes much more difficult for any one entity to take down a single web page, website, or service. It takes control away from powerful tech monopolies.

Why are people excited about the decentralized web?

In effect, the decentralized web is a lot like the earliest version of the web. It aims to roll back the changes that came with Web 2.0, when we began to communicate with each other and share information through centralized services provided by big companies such as Google, Facebook, Microsoft, and Amazon. The decentralized web aims to make us less dependent on these tech giants. Instead, users will have control over their data, enabling them to directly interact and exchange messages with others in their network.

Blockchain offers one compelling route to achieving a decentralized web. By creating a decentralized public digital ledger of transactions, you can take power away from established monopolies and hand it back to those who are simply part of the decentralized network.

We saw some advancement in this direction with the launch of Tim Berners-Lee's startup, Inrupt. The goal of this startup is to get rid of the tech giants' monopolies on user data. Tim Berners-Lee hopes to achieve this with the help of his open source project, Solid. Solid gives every user a choice about where they want to store their data, which specific people and groups can access selected elements of it, and which apps they use. Further examples are Cloudflare introducing its IPFS Gateway, which allows you to easily access content from the InterPlanetary File System (IPFS), and, more recently, the Origin DApp, a true peer-to-peer marketplace on the Ethereum blockchain built with origin-js.

A note of caution

Despite these advances, the decentralized web is still in its infancy. There are still no "killer apps" that promise the same level of features that we are used to now, and many of the apps that do exist are clunky and difficult to use. One of the promises the decentralized web makes is being faster, but there is a long way to go on that. There are also much bigger issues related to governance, such as how the decentralized web will come together when no one is in charge, and what guarantee there is that it will not become centralized again.

Is the decentralized web a treat… or a trick?

Going by the current status of the decentralized web, it seems to be a trick. No one likes change, and it takes a long time to get used to it. The decentralized web has to offer much more before it can replace the functionality we currently enjoy.

Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber
Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
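As a concrete illustration of the IPFS gateway idea mentioned above, here is a minimal, hypothetical sketch of fetching content-addressed data over plain HTTPS through a public gateway instead of running an IPFS node yourself. The gateway base URL follows Cloudflare's published pattern, and the CID is a placeholder rather than a real piece of content.

```python
# Hypothetical example: fetch IPFS content through a public HTTP gateway.
import requests

GATEWAY = "https://cloudflare-ipfs.com/ipfs/"   # example gateway base URL
CID = "<content-identifier-goes-here>"          # placeholder content identifier (CID)

response = requests.get(GATEWAY + CID, timeout=10)
response.raise_for_status()                     # fails if the CID is invalid or unreachable
print(response.text[:200])                      # first part of the retrieved content
```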


Anatomy of a Crypto Ransomware

Savia Lobo
23 May 2018
7 min read
Crypto ransomware is the worst threat at present. There are a lot of crypto ransomware variants; only some make it into the limelight, while others fade away. In this article, you will get to know about crypto ransomware and how easily one can code it to encrypt certain directories and important files.

The reason for a possible increase in the use of crypto ransomware could be that coding it is quite easy compared to other malware. The malware just needs to browse through user directories to find relevant files that are likely to be personal and encrypt them. The malware author does not need to write complex code, such as hooks to steal data. Most crypto ransomware doesn't care about hiding in the system, so most variants do not have rootkit components either; they only need to execute on the system once to encrypt all files. Some crypto ransomware also checks whether the system is already infected by other crypto ransomware.

There is a huge list of crypto ransomware. Here are a few of them:

- Locky
- Cerber
- CryptoLocker
- Petya

This article is an excerpt taken from the book 'Preventing Ransomware', written by Abhijit Mohanta, Mounir Hahad, and Kumaraguru Velmurugan.

How does crypto ransomware work?

Crypto ransomware technically does the following things:

- Finds files on the local system. On a Windows machine, it can use the FindFirstFile() and FindNextFile() APIs to enumerate files in directories. A lot of ransomware also searches for files present on shared drives.
- Checks for the file extensions that it needs to encrypt. Most variants have a hardcoded list of file extensions to encrypt. Even if it encrypts executables, it should not encrypt any of the system executables.
- Makes sure that you cannot restore the files from backup, by deleting the backups. Sometimes this is done using the vssadmin tool. A lot of crypto ransomware uses the vssadmin command, provided by Windows, to delete shadow copies. Shadow copies are backups of files and volumes, and the vssadmin (VSS administration) tool is used to manage them; VSS is the abbreviation of Volume Shadow Copy, also termed the Volume Snapshot Service.
- After encrypting the files, the ransomware leaves a note for the victim. It is often termed a ransom note and is a message from the ransomware to the victim. It usually informs the victim that the files on their system have been encrypted and that, to decrypt them, they need to pay a ransom. The ransom note instructs the victim on how to pay it.

The ransomware uses a few cryptographic techniques to encrypt files, communicate with the C&C server, and so on. We will explain this with an example in the next section. But before that, it's important to take a look at the basics of cryptography.

Overview of cryptography

A lot of cryptographic algorithms are used by malware today. Cryptography is a huge subject in itself, and this section just gives a brief overview. Malware can use cryptography for the following purposes:

- To obfuscate its own code so that antivirus software or security researchers cannot identify the actual code easily
- To communicate with its own C&C server, sometimes to send hidden commands across the network and sometimes to steal and exfiltrate data
- To encrypt the files on the victim machine

A cryptographic system can have the following components:

- Plaintext
- Encryption key
- Ciphertext, which is the encrypted text
- Encryption algorithm, also called a cipher
- Decryption algorithm

There are two types of cryptographic algorithms, based on the kind of key used: symmetric and asymmetric. A few assumptions before explaining the algorithms: the sender is the person who sends the data after encrypting it, and the receiver is the person who decrypts the data with a key.

Symmetric key

In symmetric key encryption, the same key is used by both sender and receiver; it is also called the secret key. The sender uses the key to encrypt the data, while the receiver uses the same key to decrypt it. The following algorithms use a symmetric key:

- RC4
- AES
- DES
- 3DES
- Blowfish

Asymmetric key

A symmetric key is simpler to implement, but it faces the problem of exchanging the key in a secure manner. Public key (asymmetric) cryptography overcomes the problem of key exchange by using a pair of keys: public and private. A public key can be distributed in an unsecured manner, while the private key is always kept secret by its owner. Either key can be used to encrypt, and the other is then used to decrypt. The most popular algorithms here are:

- RSA
- Diffie-Hellman
- ECC
- DSA

Secure protocols such as SSH have been implemented using public keys.

How does ransomware use cryptography?

Crypto ransomware started with simple symmetric key cryptography, but researchers could soon recover these keys, so ransomware authors started using asymmetric keys. Ransomware of the current generation uses both symmetric and asymmetric keys in a smart manner. CryptoLocker is known to use both. Here is the encryption process used by CryptoLocker (a short illustrative sketch of this hybrid pattern appears at the end of this article):

- When CryptoLocker infects a machine, it connects to its C&C server and requests a public key. An RSA public and private key pair is generated for that particular victim machine. The public key is sent to the victim machine, but the private key is retained by the C&C server.
- The ransomware on the victim machine generates an AES symmetric key, which is used to encrypt files.
- After encrypting a file with the AES key, CryptoLocker encrypts the AES key with the RSA public key obtained from the C&C server.
- The encrypted AES key, along with the encrypted file contents, is written back to the original file in a specific format.

So, in order to get the contents back, we need to decrypt the encrypted AES key, which can only be done using the private key held on the C&C server. This makes decryption close to impossible.

Analyzing crypto ransomware

The malware analysis tools and concepts remain the same here too. Here are a few observations, specific to crypto ransomware, that differ from other malware:

- Crypto ransomware, once executed, usually performs a large number of file modifications. You can see the changes in the filemon or procmon tools from Sysinternals.
- File extensions are changed in a lot of cases. In this example, the extension is changed to .scl; it will vary with different crypto ransomware.
- A lot of the time, a file with a ransom note is present on the system. Ransom notes differ between ransomware families and can be HTML, PDF, or text files.
- The ransom note's file usually has decryption instructions in its filename.

Prevention and removal techniques for crypto ransomware

In this case, prevention is better than cure, because it's hard to decrypt the encrypted files in most cases. Security vendors have come up with decryption tools for ransomware-encrypted files, but with the large increase in the number of ransomware families and in the complexity of the encryption algorithms they use, these tools sometimes fail to cope. http://www.thewindowsclub.com/list-ransomware-decryptor-tools gives you a list of tools meant to decrypt ransomware-encrypted files; they may not work in all cases of ransomware encryption.

If you've enjoyed reading this post, do check out 'Preventing Ransomware' for end-to-end knowledge of the malware currently trending in the tech industry.

Top 5 cloud security threats to look out for in 2018
How cybersecurity can help us secure cyberspace
Cryptojacking is a growing cybersecurity threat, report warns
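As promised in the CryptoLocker walkthrough above, here is a minimal, illustrative sketch of the hybrid AES + RSA pattern: file contents are encrypted with a freshly generated symmetric key, and that key is in turn encrypted with an RSA public key whose private half never leaves the attacker's server. This is hypothetical educational code written for this excerpt, not code taken from the book or from any real ransomware; it uses the Python cryptography package.

```python
# Illustrative hybrid encryption (CryptoLocker-style pattern), for understanding only.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In the real scheme, the RSA key pair is generated on the C&C server and only
# the public key is ever sent to the victim machine.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

plaintext = b"important document contents"

# Symmetric step: encrypt the data with a freshly generated AES-256 key.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)

# Asymmetric step: encrypt the AES key with the RSA public key, so only the
# holder of the private key can ever recover it.
encrypted_aes_key = public_key.encrypt(
    aes_key,
    padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    ),
)

# The encrypted AES key and the ciphertext are what get written back to the file;
# without the private key, neither the AES key nor the file can be recovered.
```

This is why, as noted above, recovery without the private key held on the C&C server is close to impossible.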


AIOps - Trick or Treat?

Bhagyashree R
31 Oct 2018
2 min read
AIOps, as the term suggests, is Artificial Intelligence for IT operations; the term was first introduced by Gartner last year. AIOps systems are used to enhance and automate a broad range of processes and tasks in IT operations with the help of big data analytics, machine learning, and other AI technologies.

Read also: What is AIOps and why is it going to be important?

In its report, Gartner estimated that, by 2020, approximately 50% of enterprises will be actively using AIOps platforms to provide insight into both business execution and IT operations. AIOps has grown fairly quickly since its introduction, with many big companies showing interest in AIOps systems. For instance, last month Atlassian acquired Opsgenie, an incident management platform that, along with planning and solving IT issues, helps you gain insights to improve your operational efficiency. Companies are adopting AIOps because it eliminates tedious routine tasks, minimizes costly downtime, and helps them gain insights from data that's trapped in silos.

Where can AIOps go wrong?

AIOps alerts us about incidents beforehand, but in some situations it can also go wrong. When an event is unusual, the system is less likely to predict it, and events that have never occurred before are entirely outside machine learning's ability to predict or analyze. Additionally, it can sometimes give false negatives and false positives. False negatives can happen when the tests are not sensitive enough to detect possible issues, while false positives can be the result of incorrect configuration. This essentially means that there will always be a need for human operators to review these alerts and warnings.

Is AIOps a trick or a treat?

AIOps is bringing new opportunities for the IT workforce, such as the AIOps Data Scientist, who will focus on solutions to correlate, consolidate, alert, analyze, and provide awareness of events. Dell defines its Data Scientist role as someone who will "contribute to delivering transformative AIOps solutions on their SaaS platform". With AIOps, the IT workforce won't just disappear; it will evolve. AIOps is definitely a treat because it reduces manual work and provides an intuitive way of handling incident response.

What is AIOps and why is it going to be important?
8 ways Artificial Intelligence can improve DevOps
Tech hype cycles: do they deserve your attention?
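To make the false positive/false negative trade-off described above more tangible, here is a toy, hypothetical example of a threshold-based detector on a stream of response-time metrics. It is not taken from any AIOps product; real platforms use far more sophisticated models, but the sensitivity knob shows the same trade-off.

```python
# Toy anomaly detector: flag points far from the mean of the series.
import statistics

def detect_anomalies(values, sensitivity=2.0):
    """Return indices of points more than `sensitivity` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values) or 1.0  # fall back if the series is completely flat
    return [i for i, v in enumerate(values) if abs(v - mean) > sensitivity * stdev]

latencies_ms = [120, 118, 125, 119, 122, 480, 121, 117]  # one obvious spike

print(detect_anomalies(latencies_ms, sensitivity=2.0))  # flags the 480 ms spike
print(detect_anomalies(latencies_ms, sensitivity=3.0))  # stricter threshold misses it: a false negative
```

Lowering the sensitivity further would start flagging normal jitter as incidents, i.e. false positives, which is exactly why human review of alerts remains necessary.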


5 things to remember when implementing DevOps

Erik Kappelman
05 Dec 2017
5 min read
DevOps is a much more realistic and efficient way to organize the creation and delivery of technology solutions to customers. But like practically everything else in the world of technology, DevOps has become a buzzword and is often thrown around willy-nilly. Let's cut through the fog and highlight concrete steps that will help an organization implement DevOps.

DevOps is about bringing your development and operations teams together

This might seem like a no-brainer, but DevOps is often explained in terms of tools rather than techniques or philosophical paradigms. At its core, DevOps is about uniting developers and operators, getting these groups to effectively communicate with each other, and then using this new communication to streamline various processes. This could include a physical change to the layout of an organization's workspace; it's incredible what can change just by rearranging the seating in an office. If you have a very large organization, development and operations might be in separate buildings, separate campuses, or even separate cities. While the efficacy of web-based communication has increased dramatically over the last few years, there is still no replacement for daily face-to-face human interaction. Putting developers and operators in the same physical space is going to increase the rate of adoption and the efficacy of various DevOps tools and techniques.

DevOps is all about updates

Updates can be aimed at expanding functionality or simply fixing or streamlining existing processes. Updates present a couple of problems to developers and operators. First, we need to keep everybody working on the same codebase. This can be achieved by using a variety of continuous integration tools. The goal of continuous integration is to make sure that changes and updates to the codebase are implemented as close to continuously as possible. This helps avoid the merging problems that can result from multiple developers working on the same codebase at the same time. Second, these updates need to be integrated into the final product. For this task, DevOps applies the concept of continuous deployment. This is essentially the same thing as continuous integration, but it has to do with deploying changes to the codebase rather than integrating them. In terms of importance to the DevOps process, continuous integration and continuous deployment are equally important. Moving updates from a developer's workspace to the codebase to production should be seamless, smooth, and continuous.

Implementing a microservices structure is imperative for an effective DevOps approach

Microservices are an extension of the service-based structure. Basically, a service structure calls for modularizing a solution's codebase into units based on functionality; microservices take this a step further, with each service performing a single task. While a service-based or microservice structure is not strictly required to implement DevOps, I have no idea why you wouldn't use one, because microservices lend themselves so well to DevOps. One way to think of a microservice structure is by imagining an ant hill in which all of the worker ants are microservices. Each ant has a specific set of abilities and is given a task from the queen. The ant then autonomously performs this task, usually gathering food, along with all of its ant friends. Remove a single ant from the pile, and nothing really happens. Replace an old ant with a new ant, and nothing really happens. The metaphor isn't perfect, but it strikes at the heart of why microservices are valuable in a DevOps framework. If we need to be continuously integrating and deploying, shouldn't we try to impact the codebase as directly as we can? When microservices are in use, changes can be made at an extremely granular level, which allows continuous integration and deployment to really shine.

Monitor your DevOps solutions

In order to continuously deploy, applications also need to be continuously monitored. This allows problems to be identified quickly, and when problems are quickly identified, the total effort required to fix them tends to go down. Your application should obviously be monitored from the perspective of whether or not it is working as it currently should, but users also need to be able to give feedback on the application's functionality. When reasonable, this feedback can then be integrated into the application. Monitoring user feedback tends to fall by the wayside when discussing DevOps. It shouldn't. The whole point of the DevOps process is to improve the user experience, and if you're not getting feedback from users in a timely manner, it's practically impossible to improve their experience. (A minimal monitoring sketch follows this article.)

Keep it loose and experiment

Part of the beauty of DevOps is that it can allow for more experimentation than other development frameworks. When microservices and continuous integration and deployment are being fully utilized, it's fairly easy to incorporate experimental changes to applications. If an experiment fails, or doesn't do exactly what was expected, it can be removed just as easily. Basically, remember why DevOps is being used and really try to get the most out of it.

DevOps can be complicated, and boiling anything down to five steps can be difficult, but if you act on these five fundamental principles you will be well on your way to putting DevOps into practice. And while it's fun to talk about what DevOps is and isn't, ultimately that's the whole point: to actually uncover a better way to work with others.
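As referenced in the monitoring section above, here is a toy, hypothetical sketch of the continuous-monitoring idea: a small poller that checks a service's health endpoint and raises an alert when it fails. The URL is a placeholder, and a real setup would use a dedicated monitoring system rather than a script.

```python
# Toy uptime monitor: poll a service's health endpoint and report failures.
import time
import requests

HEALTH_URL = "https://example.com/healthz"   # placeholder endpoint to monitor

def check_once(url, timeout=5):
    """Return (healthy, detail) for a single poll of the health endpoint."""
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.ok, f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        return False, str(exc)

if __name__ == "__main__":
    while True:
        healthy, detail = check_once(HEALTH_URL)
        if not healthy:
            # In practice this would page someone or open an incident ticket.
            print(f"ALERT: health check failed ({detail})")
        time.sleep(30)   # poll every 30 seconds
```

A monitor like this, combined with user-feedback channels, closes the feedback loop that continuous deployment depends on.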


GROVER: A GAN that fights neural fake news, as long as it creates said news

Vincy Davis
11 Jun 2019
7 min read
Last month, a team of researchers from the University of Washington and the Allen Institute for Artificial Intelligence published a paper titled 'Defending Against Neural Fake News'. The goal of the paper is to reliably detect "neural fake news" so that its harm can be minimized. To this end, the researchers built a model named GROVER. It works as a generator of fake news that can also spot its own generated fake news articles, as well as those generated by other AI models.

GROVER (Generating aRticles by Only Viewing mEtadata Records) models can generate an efficient yet controllable news article, with not only the body but also the title, news source, publication date, and author list. The researchers affirm that the 'best models for generating neural disinformation are also the best models at detecting it'. The GROVER framework represents fake news generation and detection as an adversarial game between two systems:

Adversary: This system generates fake stories that match specified attributes: generally, being viral or persuasive. The stories must read as realistic to both human users and the verifier.

Verifier: This system classifies news stories as real or fake. A verifier has access to unlimited real news stories and a few fake news stories from a specific adversary.

The dual objective of these two systems suggests an escalating 'arms race' between attackers and defenders. It is expected that as the verification systems get better, the adversaries will follow.

Modeling conditional generation of neural fake news using GROVER

GROVER adopts a language modeling framework which allows for flexible decomposition of an article in the order p(domain, date, authors, headline, body). During inference, a set of fields F is provided as context, with each field f containing field-specific start and end tokens. During training, inference is simulated by randomly partitioning an article's fields into two disjoint sets, F1 and F2. The researchers also randomly drop out individual fields with probability 10%, and drop out all but the body with probability 35%; this allows the model to learn how to perform unconditional generation.

For language modeling, two evaluation modes are considered: unconditional, where no context is provided and the model must generate the article body, and conditional, in which the full metadata is provided as context. The researchers evaluate the quality of disinformation generated by their largest model, GROVER-Mega, using p=0.96. The articles are classified into four classes: human-written articles from reputable news websites (Human News), GROVER-written articles conditioned on the same metadata (Machine News), human-written articles from known propaganda websites (Human Propaganda), and GROVER-written articles conditioned on the propaganda metadata (Machine Propaganda).

Image source: Defending Against Neural Fake News

When rated by qualified workers on Amazon Mechanical Turk, it was found that, though the quality of GROVER-written news is not as high as human-written news, GROVER is very skilled at rewriting propaganda. The overall trustworthiness score of propaganda increases from 2.19 to 2.42 (out of 3) when rewritten by GROVER.

Neural fake news detection using GROVER

The role of the verifier is to mitigate the harm of neural fake news by classifying articles as human- or machine-written. Neural fake news detection is framed as a semi-supervised problem: the neural verifier (or discriminator) has access to many human-written news articles from March 2019 and before, i.e., the entire RealNews training set, but only limited access to generations and to more recent news articles. For example, 10k news articles from April 2019 are used for generating article body text, and another 10k articles are used as a set of human-written news articles; the resulting set is split in a balanced way, with 10k for training, 2k for validation, and 8k for testing. It is evaluated using two modes (a small illustrative sketch of the paired mode appears at the end of this article):

In the unpaired setting, a verifier is provided single news articles, which must be classified independently as Human or Machine.

In the paired setting, a model is given two news articles with the same metadata, one real and one machine-generated. The verifier must assign the machine-written article a higher Machine probability than the human-written article.

Both modes are evaluated in terms of accuracy.

Image source: Defending Against Neural Fake News

It was found that the paired setting appears significantly easier than the unpaired setting across the board, suggesting that it is often difficult for the model to calibrate its predictions. Second, model size is highly important in the arms race between generators and discriminators: using GROVER to discriminate GROVER's generations results in roughly 90% accuracy across the range of sizes; if a larger generator is used, accuracy slips below 81%; conversely, if the discriminator is larger, accuracy is above 98%. Lastly, other discriminators perform worse than GROVER overall, which suggests that effective discrimination requires having a similar inductive bias to the generator.

Thus it has been found that GROVER can rewrite propaganda articles, with humans rating the rewritten versions as more trustworthy, while at the same time GROVER can also defend against these models. The researchers are of the opinion that an ensemble of deep generative models, such as GROVER, should be used to analyze the content of a text.

Obviously, the workings of the GROVER model have caught many people's attention.

https://twitter.com/str_t5/status/1137108356588605440
https://twitter.com/currencyat/status/1137420508092391424

While some find this an interesting mechanism to combat fake news, others point out that it doesn't matter if GROVER can identify its own texts if it can't identify the texts generated by other models; releasing a model like GROVER can turn out to be extremely irresponsible rather than defensive.

A user on Reddit says, "These techniques for detecting fake news are fundamentally misguided. You cannot just train a statistical model on a bunch of news messages and expect it to be useful in detecting fake news. The reason for this should be obvious: there is no real information about the label ('fake' vs 'real' news) encoded in the data. Whether or not a piece of news is fake or real depends on the state of the external world, which is simply not present in the data. The label is practically independent of the data."

Another user on Hacker News comments, "Generative neural networks these days are both fascinating and depressing - feels like we're finally tapping into how subsets of human thinking & creativity work. But that knocks us off our pedestal, and threatens to make even the creative tasks we thought were strictly a human specialty irrelevant; I know we're a long way off from generalized AI, but we seem to be making rapid progress, and I'm not sure society's mature enough or ready for it. Especially if the cutting edge tools are in the service of AdTech and such, endlessly optimizing how to absorb everybody's spare attention. Perhaps there's some bright future where we all just relax and computers and robots take care of everything for us, but can't help feeling like some part of the human spirit is dying."

Some users feel that this kind of 'generate and detect your own fake news' model will become unnecessary in the future: it's just a matter of time before text written by algorithms is indistinguishable from human-written text, and at that point there will be no way to tell such articles apart. One user suggests, "I think to combat fake news, especially algorithmic one, we'll need to innovate around authentication mechanism that can effectively prove who you are and how much effort you put into writing something. Digital signatures or things like that."

For more details about the GROVER model, head over to the research paper.

Worried about Deepfakes? Check out the new algorithm that manipulates talking-head videos by altering the transcripts
Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
OpenAI researchers have developed Sparse Transformers, a neural network which can predict what comes next in a sequence
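As mentioned above, here is a small illustrative sketch of the paired-setting evaluation. This is hypothetical code written for this summary, not code from the GROVER paper; `verifier_prob_machine` stands in for whatever discriminator is being evaluated.

```python
# Paired-setting accuracy: for each (human, machine) pair sharing the same metadata,
# the verifier should assign the machine-written article a higher P(Machine).

def paired_accuracy(pairs, verifier_prob_machine):
    """pairs: iterable of (human_text, machine_text) tuples with identical metadata."""
    correct = 0
    total = 0
    for human_text, machine_text in pairs:
        p_human = verifier_prob_machine(human_text)
        p_machine = verifier_prob_machine(machine_text)
        correct += int(p_machine > p_human)
        total += 1
    return correct / total if total else 0.0
```

Because the verifier only has to rank two candidates rather than calibrate an absolute threshold, this setting is easier than the unpaired one, which matches the results reported in the paper.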


Winnti Malware: Chinese hacker group attacks major German corporations for years, German public media investigation reveals

Fatema Patrawala
26 Jul 2019
9 min read
German public broadcasters, Bavarian Radio & Television Network (BR) and Norddeutscher Rundfunk (NDR), have published a joint investigation report on a hacker group that has been spying on certain businesses for years. Security researchers Hakan Tanriverdi, Svea Eckert, Jan Strozyk, Maximilian Zierer, and Rebecca Ciesielski contributed to the report, which sheds light on how this group of hackers operates and how widespread it is. The investigation started with one of the reporters receiving the code daa0 c7cb f4f0 fbcf d6d1, which eventually led the team to discover a hacking group of Chinese origin operating the Winnti malware.

BR and NDR reporters, in collaboration with several IT security experts, have analyzed the Winnti malware. Moritz Contag of Ruhr University Bochum extracted information from different varieties of the malware and wrote a script for this analysis; Silas Cutler, an IT security expert with US-based Chronicle Security, confirmed it. The report analyses cases from the following targeted companies:

Gaming: Gameforge, Valve
Software: TeamViewer
Technology: Siemens, Sumitomo, Thyssenkrupp
Pharma: Bayer, Roche
Chemical: BASF, Covestro, Shin-Etsu

Hakan Tanriverdi, one of the reporters, wrote on Twitter, "We looked at more than 250 samples, wrote Yara rules, conducted nmap scans." Yara rules are primarily used in malware research and detection, while nmap is a free and open source network scanner used to discover hosts and services on a computer network. Additionally, the team has presented ways to find out whether one is infected by the Winnti malware; to learn about these methods in detail, check out the research report.

Winnti malware is complex, created by "digital mercenaries" of Chinese origin

Winnti is a highly complex structure that is difficult to penetrate. The term denotes both a sophisticated malware and an actual group of hackers; IT security experts like to call them digital mercenaries. According to Kaspersky Lab research from 2011, the Winnti group has been active for several years and in its early days specialized in cyber-attacks against the online video game industry. However, according to this investigation, the hacker group has now homed in on Germany and its blue-chip DAX corporations. BR and NDR reporters analyzed hundreds of malware versions used for unsavory purposes and found that the group has targeted at least six DAX corporations and stock-listed top companies of German industry.

In October 2016, several DAX corporations, including BASF and Bayer, founded the German Cyber Security Organization (DCSO). The job of DCSO's IT security experts is to observe and recognize hacker groups like Winnti and to get to the bottom of their motives. In Winnti's case, DCSO speaks of a "mercenary force" which is said to be closely linked with the Chinese government. The reporters also interviewed company staff, IT security experts, government officials, and representatives of security authorities. An IT security expert who has been analyzing the attacks for years said, "Any DAX corporation that hasn't been attacked by Winnti must have done something wrong." A high-ranking German official told the reporters, "The numbers of cases are mind-boggling," and claims that the group continues to be highly active, to this very day.

Winnti hackers are audacious and "don't care if they're found out"

The report points out that the hackers choose convenience over anonymity. Working with Moritz Contag, the reporters found that the hackers wrote the names of the companies they wanted to spy on directly into their malware. Contag analyzed more than 250 variations of the Winnti malware and found them to contain the names of global corporations. Hackers usually take precautions, which experts refer to as Opsec, but the Winnti group's Opsec was dismal to say the least. Somebody who has been keeping an eye on Chinese hackers on behalf of a European intelligence service believes that they didn't really care: "These hackers don't care if they're found out or not. They care only about achieving their goals."

Every hacking operation leaves digital traces, and if you watch hackers carefully, each and every step can be logged. To decipher the traces of the Winnti hackers, the reporters took a closer look at the program code of the malware itself, using "VirusTotal", a malware research engine created by Google.

The hacker group initially attacked the gaming industry for financial gain

In its early days, the Winnti group was mainly interested in making money. Its initial target was Gameforge, a gaming company based in the German town of Karlsruhe. In 2011, an email message found its way into Gameforge's mailbox. A staff member opened the attached file and, without realizing it, started the Winnti program. Shortly afterwards, the administrators became aware that someone was accessing Gameforge's databases and raising account balances. Gameforge decided to implement Kaspersky antivirus software and arranged for Kaspersky's IT security experts to visit the office. The security experts found suspicious files, analyzed them, and noticed that the system had been infiltrated by hackers acting like Gameforge's administrators. It turned out that the hackers had taken over a total of 40 servers.

"They are a very, very persistent group," says Costin Raiu, who has been watching Winnti since 2011 and was in charge of Kaspersky's malware analysis team. "Once the Winnti hackers are inside a network, they take their sweet time to really get a feel for the infrastructure," he says. The hackers will map a company's network and look for strategically favorable locations for placing their malware. They keep tabs on which programs are used in a company and then exchange a file in one of these programs. The modified file looks like the original but is secretly supplemented by a few extra lines of code; thereafter, the manipulated file does the attackers' bidding.

Raiu and his team have been following the digital tracks left behind by some of the Winnti hackers. "Nine years ago, things were much more clear-cut. There was a single team, which developed and used Winnti. It now looks like there is at least a second group that also uses Winnti." This view is shared by many IT security companies, and it is this second group that is getting the German security authorities worried. One government official says, "Winnti is very specific to Germany. It is the attacker group that's being encountered most frequently."

Second group of Winnti hackers focused on industrial espionage

The report says that by 2014, the Winnti malware code was no longer limited to game manufacturers. The second group's job was mainly industrial espionage. The hackers targeted high-tech companies as well as chemical and pharmaceutical companies, attacking firms in Japan, France, the U.S., and Germany.

The report sheds light on how Winnti hackers broke into Henkel's network in 2014. The reporters present three files containing a website belonging to Henkel and the name of the hacked server; one, for example, starts with the letter sequence DEDUSSV. Server names can be arbitrary, but it is highly probable that DE stands for Germany and DUS for Düsseldorf, where the Henkel headquarters are located. The hackers were able to monitor all activities running on the web server and reached systems that didn't have direct internet access. The company confirmed the Winnti incident and issued the following statement: "The cyberattack was discovered in the summer of 2014 and Henkel promptly took all necessary precautions." Henkel claims that a "very small portion" of its worldwide IT systems had been affected, namely the systems in Germany, and that there was no evidence suggesting any sensitive data had been diverted.

Other than Henkel, Winnti also targeted companies like Covestro, a manufacturer of adhesives, lacquers and paints; Japan's biggest chemical company, Shin-Etsu Chemical; and Roche, one of the largest pharmaceutical companies in the world. Winnti hackers also penetrated the BASF and Siemens networks. A BASF spokeswoman says that in July 2015, hackers successfully overcame "the first levels" of defense: "When our experts discovered that the attacker was attempting to get around the next level of defense, the attacker was removed promptly and in a coordinated manner from BASF's network." She added that no business-relevant information had been lost at any time. According to Siemens, they were penetrated by the hackers in June 2016. "We quickly discovered and thwarted the attack," a Siemens spokesperson said.

Winnti hackers also involved in political espionage

According to the report, there are several indicators that the hacker group is also interested in penetrating political targets. The Hong Kong government was spied on by the Winnti hackers; the reporters found four infected systems with the help of an nmap network scan and proceeded to inform the government by email. The reporters also found that a telecommunications provider from India had been infiltrated; the company happens to be located in the region where the Tibetan government has its headquarters. Incidentally, the relevant identifier in the malware is called "CTA", and a file which ended up on VirusTotal in 2018 contains a straightforward keyword: "tibet".

The report also throws light on attacks that were not directly related to political espionage but had connections among them. For example, the team found that Marriott hotels in the USA were attacked by the hackers, and the networks of the Indonesian airline Lion Air were also penetrated. The attackers wanted data about where people travel and where they are located at any given time. The team backs this up by showing the relevant coded files in the report.

To read the full research report, check out the official German broadcaster's website.

Hackers steal bitcoins worth $41M from Binance exchange in a single go!
VLC media player affected by a major vulnerability in a 3rd library, libebml; updating to the latest version may help
An IoT worm Silex, developed by a 14 year old resulted in malware attack and taking down 2000 devices

The new tech worker movement: How did we get here? And what comes next?

Bhagyashree R
28 Jan 2019
8 min read
Earlier this month, Logic Magazine, a print magazine about technology, hosted a discussion about the past, present, and future of the tech worker movement. The event was co-sponsored by solidarity groups like the Tech Workers Coalition, Coworker.org, the NYC-DSA Tech Action Working Group, and Science for the People.

Among the panelists was Joan Greenbaum, who was involved in organizing tech workers in the mainframe era and was part of Computer People for Peace. Also on the panel was Meredith Whittaker, a research scientist at New York University, co-founder of the AI Now Institute and the Google Open Research group, and one of the organizers of the Google Walkout. So was Liz Fong-Jones, Developer Advocate at Google Cloud Platform, who recently tweeted that she will be leaving the company in February because of Google's lack of leadership in response to the demands made by employees during the Google Walkout in November 2018. Also in attendance were Emma Quail, representing Unite Here, and Patricia Rosa, a Facebook food service worker who was inspired to fight for the union after watching a pregnant friend lose her job because she took one day off for a doctor's appointment.

The discussion was held in New York and hosted by Ben Tarnoff, the co-founder of Logic Magazine. It lasted almost an hour, after which the Q&A session started. You can see the full discussion on Logic's Facebook page.

The rise of tech workers organizing

In recent years, we have seen tech workers coming together to stand against unjust decisions taken by their companies. We saw tech workers at companies like Google, Amazon, and Microsoft raising their voices against contracts with government agencies like ICE and the Pentagon that are purely profit-oriented and can prove harmful to humanity. For instance, there was a huge controversy around Google's Project Maven, which was focused on analyzing drone footage and could eventually have been used to improve drone strikes on the battlefield. More than 3,000 Google employees signed a petition against the project, which led to Google deciding not to renew its contract with the U.S. Department of Defense in 2019. In December 2018, Google workers launched an industry-wide effort focused on ending forced arbitration, which affects at least 60 million workers in the US alone. In June, Amazon employees demanded that Jeff Bezos stop selling Rekognition, Amazon's facial recognition technology, to law enforcement agencies and discontinue partnerships with companies that work with U.S. Immigration and Customs Enforcement (ICE).

We also saw workers organizing campaigns demanding safer workplaces free from sexual harassment and gender discrimination, better working conditions, retirement plans, professionalism standards, and fairness in equity compensation. In November, there was a massive Google Walkout, with 20,000 Google employees from all over the world protesting against how Google handled sexual harassment cases. This backlash was triggered when it came to light that Google had paid millions of dollars in exit packages to male executives accused of sexual misconduct.

Let's look at some of the highlights from the discussion.

What do these issues, ranging from controversial contracts and workplace issues to better benefits and a safe, equitable workplace, have to do with one another?

Most companies today are motivated by the profits they make, which also shows in the technology they produce. These technologies benefit a small fraction of users while affecting a larger, predictable demographic of people, for instance, black and brown people. Meredith Whittaker remarked, "These companies are acting like parallel states right now." The technologies they produce have a significant impact over a number of domains that we are not even aware of. Liz Fong-Jones feels that it is also about us as tech workers taking responsibility for what we build. We feed the profit motive these companies have if we keep participating in building systems that can have bad implications for users, or if we don't speak up for the workers working alongside us. To hold these companies accountable, and to ensure that what we build is used for good and that all workers are treated fairly, we need to come together no matter what part of the company we work in. Joan Greenbaum also believes that these types of movements cannot succeed without forming alliances.

Is there any alliance work between tech workers in different roles?

Emma Quail shared that there have been many collaborations between engineers, tech employees, cafeteria workers, and other service workers in fights against companies treating their employees differently. These collaborations are important, as tech workers and engineers are much more privileged in these companies. "They have more voice, their job is taken more seriously," said Emma Quail. Patricia Rosa, sharing her experience, said, "When some of the tech workers came to one of our negotiations and spoke on our behalf, the company got nervous, and they finally gave them the contract." Liz Fong-Jones mentioned that the main challenge in eliminating this discrimination is that employers want to keep their workers separate. As an example, she added, "Google prohibits its cafeteria workers from being on campus when they are not on shift, it prohibits them from holding leadership positions and employee resource groups." These companies resort to such policies because they do not want their "valuable employees" to find out about the working conditions of other workers.

In the last few years, the tech worker movement has caught the attention of society in a big way, but this did not happen overnight. How did we get to this moment?

Liz Fong-Jones credits the Me Too movement as one of the turning points: it made workers realize that they are not alone and that there are people who share the same concerns. Another factor, she thinks, was management coming up with proposals that could have negative implications for people and asking employees to keep them secret; now, tech workers are more informed about what exactly they are building. In the last few years, tech companies have also come under public attention and scrutiny because of the many tech scandals, whether related to data, software, or workplace rights. One of the root causes of this has been an endless growth requirement. Meredith Whittaker shared, "Over the last few years, we saw series of relentless embarrassing answers to substantially serious questions. They cannot keep going like this."

What's in the future?

Joan Greenbaum rightly mentioned that tech companies should actually "look to work with people, what the industry calls users." They should adopt participatory design instead of user-centered design. Participatory design is an approach in which all stakeholders, from employees and partners to local business owners and customers, are involved in the design process. Meredith Whittaker remarked, "The people who are getting harmed by these technologies are not the people who are going to get a paycheck from these companies. They are not going to check tech power or tech culture unless we learn how to know each other and form alliances that also connect corporate." Once we come together and form alliances, we will be able to press these companies on the updates and products they are building and understand their implications. The future, basically, lies in doing our homework, knowing how these companies work, building relationships, and coming together against any unjust decisions by these companies.

Liz Fong-Jones added, "The Google Walkout was just the beginning. The labor movement will spread into other companies and also having more visible effects beyond a walkout." Emma Quail believes that companies will need to address issues related to housing, immigration, and people's rights. Patricia Rosa said that, going forward, we need to spread awareness among other workers that there are people who care about their rights and how they are treated at the workplace. If they know there are people to support them, they will not be scared to speak up, as Patricia was when she started her journey.

Some of the questions asked in the Q&A session were:

What's different politically about tech than any other industry?
How was the Google Walkout organized? I was a tech contractor and didn't hear about it until it happened.
Are there any possibilities of creating a single union of all tech workers no matter what their roles are? Is that a desirable far goal?
How can tech workers working in one state relate to workers working internationally?

Watch the full discussion at Logic's Facebook page.

Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley
Sally Hubbard on why tech monopolies are bad for everyone: Amazon, Google, and Facebook in focus
How far will Facebook go to fix what it broke: Democracy, Trust, Reality


Through the customer's eyes: 4 ways Artificial Intelligence is transforming ecommerce

Savia Lobo
23 Nov 2017
5 min read
We have come a long way from what ecommerce looked like two decades ago. From a non-existent entity, it has grown into a world-devouring business model that is a real threat to the traditional retail industry. It has moved from a basic static web page with limited product listings to a full-grown virtual marketplace where anyone can buy or sell anything, from anywhere, at any time, at the click of a button. At the heart of this transformation are two things: customer experience and technology. This is what Jeff Bezos, founder and CEO of Amazon, one of the world's largest ecommerce sites, believes: "We see our customers as invited guests to a party, and we are the hosts. It's our job every day to make every important aspect of the customer experience a little bit better."

Now, with the advent of AI, the retail space, and especially e-commerce, is undergoing another major transformation that will redefine customer experiences and thereby once again change the dynamics of the industry. So, how is AI-powered ecommerce actually changing the way shoppers shop?

AI-powered ecommerce makes search easy, accessible and intuitive

Looking for something? Type it! Say it! Searching for a product you can't name? No worries. Just show a picture.

"A lot of the future of search is going to be about pictures instead of keywords." - Ben Silbermann, CEO of Pinterest

We take that statement with a pinch of salt, but we are reasonably confident that a lot of product search is going to be non-text based. Though text searches are common, voice and image searches in e-commerce are now gaining traction. AI makes it possible for the customer to move beyond simple text-based product search and to search more easily and intuitively through voice and visual product searches. This also makes search more accessible. It uses Natural Language Processing to understand the customer's natural language, be it in text or speech, and provide more relevant search results. Visual product searches are made possible through a combination of computer vision, image recognition, and reverse image search algorithms.

Amazon Echo, a home-automation speaker, has a voice assistant, Alexa, that helps customers buy products online through simple conversations. Slyce offers a visual search feature wherein the customer can scan a barcode, a catalog, or even a real image, much like Amazon's in-app visual search. Clarifai helps developers build applications that analyze images and videos and search for related content.

AI-powered ecommerce makes personalized product recommendations

When you search for a product, the AI underneath recommends further options based on your search history or on what other users with similar tastes found interesting. Recommendation engines employ one or a combination of three types of recommendation algorithms: content-based filtering, collaborative filtering, and complementary products (a toy sketch appears at the end of this article). The relevance and accuracy of the results depend on various factors, such as the type of recommendation engine used, the quantity and quality of data used to train the system, and the data storage and retrieval strategies used, among others. For instance, Amazon uses DSSTNE (Deep Scalable Sparse Tensor Network Engine, pronounced "Destiny") to make customized product recommendations to its customers. The customer data collected and stored is used by DSSTNE to train and generate predictions for customers. The data processing itself takes place on CPU clusters, whereas the training and predictions take place on GPUs, to ensure speed and scalability.

Virtual assistants as your personal shopping assistants

Now, what if we said you can have all the benefits we have discussed above without having to do a lot of work yourself? In other words, what if you had a personal shopping assistant who knows your preferences, handles all the boring aspects of shopping (searching, comparing prices, going through customer reviews, tracking orders, and so on), and brought you products that were just right, with the best deals? Mona, one such personal shopper, can do all of the above and more, using a combination of artificial intelligence and big data. Virtual assistants can be fully AI-driven or a combination of AI and human collaboration. Chatbots also assist shoppers, but within a more limited scope: they can help resolve customer queries with zero downtime and assist in simple tasks such as notifying the customer of price changes or placing and tracking orders. Domino's has a Facebook Messenger bot that enables customers to order food. Metail, an AI-powered ecommerce website, takes in your body measurements so you can actually see how clothing would look on you. Botpress helps developers build their own chatbots in less time.

Maximizing CLV (customer lifetime value) with AI-powered CRM

AI-powered ecommerce in CRM aims to help businesses predict CLV and sell the right product to the right customer at the right time, every time, leveraging the machine learning and predictive capabilities of AI. It also helps businesses provide the right level of customer service and engagement. In other words, by combining predictive capabilities with automated one-to-one personalization, an AI-backed CRM can maximize CLV for every customer. Salesforce Einstein and IBM Watson are some of the frontrunners in this space. IBM Watson, with its cognitive touch, helps ecommerce sites analyze their mountain of customer data and glean useful insights to predict things like what customers are looking for, which brands are popular, and so on. It can also help with dynamic pricing of products by predicting when to discount and when to increase the price, based on analyzing demand and competitors' pricing tactics.

It is clear that AI not only has the potential to transform e-commerce as we know it, but has already become central to the way leading ecommerce platforms such as Amazon function. Intelligent e-commerce is here and now. The near future of ecommerce is omnicommerce, driven by the marriage between AI and robotics, ushering in the ultimate customer experience - one that is beyond our current imagination.
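To make the recommendation-engine idea above concrete, here is a toy, hypothetical sketch of item-based collaborative filtering using cosine similarity on a tiny user-item matrix. Production engines (such as Amazon's DSSTNE mentioned earlier) work on vastly larger, sparser data and use neural models, but the underlying "people with similar tastes" intuition is the same.

```python
# Toy item-based collaborative filtering with cosine similarity (illustrative only).
import numpy as np

# Rows are users, columns are products; 0 means "not rated/purchased".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user_idx, top_n=2):
    """Score each unrated item by its average similarity to items the user already rated."""
    user = ratings[user_idx]
    scores = {}
    for item in range(ratings.shape[1]):
        if user[item] == 0:  # only recommend items the user has not interacted with
            sims = [cosine_sim(ratings[:, item], ratings[:, other])
                    for other in range(ratings.shape[1]) if user[other] > 0]
            scores[item] = float(np.mean(sims)) if sims else 0.0
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(0))  # items the first user has not bought yet, ranked by predicted interest
```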


What are APIs? Why should businesses invest in API development?

Packt Editorial Staff
25 Jul 2019
9 min read
Application Programming Interfaces (APIs) are like doors that provide access to information and functionality to other systems and applications. APIs share many of the same characteristics as doors; for example, they can be as secure and closely monitored as required. APIs can add value to a business by allowing the business to monetize information assets, comply with new regulations, and also enable innovation by simply providing access to business capabilities previously locked in old systems. This article is an excerpt from the book Enterprise API Management written by Luis Weir. This book explores the architectural decisions, implementation patterns, and management practices for successful enterprise APIs. In this article, we’ll define the concept of APIs and see what value APIs can add to a business. APIs, however, are not new. In fact, the concept goes way back in time and has been present since the early days of distributed computing. However, the term as we know it today refers to a much more modern type of APIs, known as REST or web APIs. The concept of APIs Modern APIs started to gain real popularity when, in the same year of their inception, eBay launched its first public API as part of its eBay Developers Program. eBay's view was that by making the most of its website functionality and information also accessible via a public API, it would not only attract, but also encourage communities of developers worldwide to innovate by creating solutions using the API. From a business perspective, this meant that eBay became a platform for developers to innovate on and, in turn, eBay would benefit from having new users that perhaps it couldn't have reached before. eBay was not wrong. In the years that followed, thousands of organizations worldwide, including known brands, such as Salesforce.com, Google, Twitter, Facebook, Amazon, Netflix, and many others, adopted similar strategies. In fact, according to the programmableweb.com (a well-known public API catalogue), the number of publicly available APIs has been growing exponentially, reaching over 20k as of August 2018. Figure 1: Public APIs as listed in programmableweb.com in August 2018 It may not sound like much, but considering that each of the listed APIs represents a door to an organization's digital offerings, we're talking about thousands of organizations worldwide that have already opened their doors to new digital ecosystems, where APIs have become the product these organizations sell and developers have become the buyers of them. Figure: Digital ecosystems enabled by APIs In such digital ecosystems, communities of internal, partner, or external developers can rapidly innovate by simply consuming these APIs to do all sorts of things: from offering hotel/flight booking services by using the Expedia API, to providing educational solutions that make sense of the space data available through the NASA API. There are ecosystems where business partners can easily engage in business-to-business transactions, either to resell goods or purchase them, electronically and without having to spend on Electronic Data Interchange (EDI) infrastructure. Ecosystems where an organization's internal digital teams can easily innovate as key enterprise information assets are already accessible. So, why should businesses care about all this? There is, in fact, not one answer but multiple, as described in the subsequent sections. APIs as enablers for innovation and bimodal IT What is innovation? 
APIs as enablers for innovation and bimodal IT

What is innovation?

According to a common definition, innovation is the process of translating an idea or invention into a good or service that creates value or for which customers will pay. In the context of businesses, according to an article by HBR, innovation manifests itself in two ways:

- Disruptive innovation: the process whereby a smaller company with fewer resources is able to successfully challenge established incumbent businesses.
- Sustaining innovation: when established businesses (incumbents) improve their goods and services in the eyes of existing customers. These improvements can be incremental advances or major breakthroughs, but they all enable firms to sell more products to their most profitable customers.

Why is this relevant? It is well known that established businesses struggle with disruptive innovation. The Netflix vs Blockbuster example reminds us of this fact. By the time disruptors are able to catch up with an incumbent's portfolio of goods and services, they are able to do so with lower prices, better business models, lower operating costs, and far more agility and speed in introducing new or enhanced features. At this point, sustaining innovation is not good enough to respond to the challenge.

With all the recent advances in technology and the internet, the rate at which disruptive innovation is challenging incumbents has only grown exponentially. Therefore, in order for established businesses to endure the challenge put upon them, they must somehow also become disruptors. The same HBR article describes a point of view on how to achieve this from a business standpoint. From a technology standpoint, however, unless the several systems that underpin a business are "enabled" to deliver such disruption, no matter what is done from a business standpoint, this exercise will likely fail.

Perhaps by mere coincidence, or by true acknowledgment of the aforesaid, Gartner introduced the concept of bimodal IT in December 2013, and this concept is now mainstream. Gartner defined bimodal IT as the following:

"The practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed."

Figure: Gartner's bimodal IT

According to Gartner, Mode 1 (or slow) IT organizations focus on delivering core IT services on top of more traditional and hard-to-change systems of record, which in principle are changed and improved in longer cycles and are usually managed with long-term waterfall project mechanisms. For Mode 2 (or fast) IT organizations, on the other hand, the main focus is to deliver agility and speed, and therefore they act more like a startup (or digital disruptor, in HBR terms) inside the same enterprise.

However, what is often misunderstood is how fast IT organizations can disruptively innovate when most of the information assets, which are critical to bringing context to any innovation, reside in backend systems, and any sort of access has to be delivered by the slower IT sibling. This dilemma means that the speed of innovation is constrained by the speed at which access to core information assets can be delivered. As the saying goes, "Where there's a will there's a way." APIs could be implemented as the means for the fast IT to access core information assets and functionality without the intervention of the slow IT.
By using APIs to decouple the fast IT from the slow IT, innovation can occur more easily. However, as with everything, it is easier said than done. In order to achieve such organizational decoupling using APIs, organizations should first build an understanding of which information assets and business capabilities are to be exposed as APIs, so the fast IT can consume them as required. This understanding must also articulate the priorities of when different assets are required and by whom, so the creation of APIs can be properly planned for and delivered. Luckily for those organizations that already have mature service-oriented architectures (SOA), some of this work will probably already be in place. For organizations without such luck, this activity should be planned for and should be a fundamental component of the digital strategy.

Then the remaining question would be: which team is responsible for defining and implementing such APIs, the fast IT or the slow IT? Although the long answer to this question is addressed throughout the different chapters of the book, the short answer is neither and both. It requires a multi-disciplinary team of people, with the right technology capabilities available to them, so they can incrementally API-enable the existing technology landscape, based on business-driven priorities.

APIs to monetize information assets

Many experts in the industry concur that an organization's most important asset is its information. In fact, a recent study by the Massachusetts Institute of Technology (MIT) suggests that data is the single most important asset for organizations:

"Data is now a form of capital, on the same level as financial capital in terms of generating new digital products and services. This development has implications for every company's competitive strategy, as well as for the computing architecture that supports it."

If APIs act as doors to such assets, then APIs also provide businesses with an opportunity to monetize them. In fact, some organizations are already doing so. According to another article by HBR, 50% of the revenue that Salesforce.com generates comes from APIs, while eBay generates about 60% of its revenue through its API. This is perhaps not such a huge surprise, given that both of these organizations were pioneers of the API economy.

Figure: The API economy in numbers

What's even more surprising is the case of Expedia. According to the same article, 90% of Expedia's revenue is generated via APIs. This is really interesting, as it basically means that Expedia's main business is to indirectly sell electronic travel services via its public API.

However, it's not all that easy. According to the previous study by MIT, most CEOs of Fortune 500 companies don't yet fully acknowledge the value of APIs. An intrinsic reason for this could be the lack of understanding and visibility over how data is currently being (or not being) used. Assets that sit hidden in systems of record, accessed only via traditional integration platforms, will not, in most cases, give the business insight into how information is being used and the business value it adds. APIs, on the other hand, are better suited to providing insight about how, by whom, when, and why information is being accessed, therefore giving the business the ability to make better use of information and, for example, determine which assets have better capital potential.

In this article we provided a short description of APIs and how they act as an enabler of digital strategies.
Define the right organisation model for business-driven APIs with Luis Weir's upcoming release Enterprise API Management.

To create effective API documentation, know how developers use it, says ACM
GraphQL API is now generally available
Next.js 9 releases with built in zero-config TypeScript support, automatic static optimization, API routes and more


MongoDB: Issues You Should Pay Attention To

Tess Hsu
21 Oct 2016
4 min read
MongoDB, founded in 2007 and with more than 15 million downloads, excels at supporting real-time analytics for big data applications. Rather than storing data in tables made out of individual rows, MongoDB stores it in collections made out of JSON documents. But why use MongoDB? How does it work? What issues should you pay attention to? Let's answer these questions in this post.

MongoDB, a NoSQL database

MongoDB is a NoSQL database, where NoSQL means "Not Only SQL". The data structure is a combination of key-value pairs, like JSON. The data types are very flexible, but that flexibility can become a problem if the schema is not defined properly. Here are some good reasons you should use MongoDB:

- If you are a front-end developer, MongoDB is much easier to learn than MySQL, because the MongoDB base language is JavaScript and JSON.
- MongoDB works well for big data; for instance, you can de-normalize and flatten 6 tables into just 2 tables.
- MongoDB is document-based, so it is a good fit if you have a lot of documents of a single type.

So, now let's examine how MongoDB works, starting with installing MongoDB:

1. Download MongoDB from https://www.mongodb.com/download-center#community.
2. Unzip your MongoDB file.
3. Create a folder for the database, for example, Data/mydb.
4. Open cmd at the MongoDB path and run $ mongod --dbpath ../data/mydb
5. Run $ mongo to make sure that it works.
6. Run $ show dbs, and you will see two databases: admin and local.
7. If you need to shut down the server, use $ db.shutdownServer().

MongoDB basic usage

Now that you have MongoDB on your system, let's examine some basic usage of MongoDB, covering insertion of a document, removal of a document, and how to drop a collection from MongoDB.

To insert a document, use the cmd call. Here we use employee as an example to insert a name, an account, and a country. You will see the data shown in JSON.

To remove a document: db.collection.remove({ condition }, justOne), where justOne is true or false and, when true, removes only the first matching document; if you want to remove all documents in a collection, use db.employee.remove({}).

To drop a collection (containing multiple documents) from the database, use: db.collection.drop()

For more commands, please look at the MongoDB documentation.
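For readers who prefer working from application code rather than the mongo shell, the same insert/remove/drop flow can be reproduced with a driver. The following is a minimal sketch using the PyMongo driver against a local mongod; the employee collection, the field values, and the connection string are illustrative assumptions, not part of the original walkthrough.

```python
# Minimal PyMongo sketch of the insert/remove/drop flow shown above.
# Assumes a local mongod on the default port and an illustrative "mydb" database.
import re

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["mydb"]
employees = db["employee"]

# Insert a document with a name, an account, and a country (illustrative values).
employees.insert_one({"name": "Russell", "account": "russell01", "country": "US"})

# Remove only the first matching document (the shell's justOne=true behaviour).
employees.delete_one({"name": "Russell"})

# Remove every document in the collection (the shell's remove({}) behaviour).
employees.delete_many({})

# Case-insensitive filter, equivalent to the shell's /Russell/i regex example.
matches = list(employees.find({"name": re.compile("russell", re.IGNORECASE)}))

# Drop the whole collection, equivalent to db.collection.drop() in the shell.
employees.drop()
```

Note that delete_one and delete_many map onto the justOne flag of the shell's remove call, and, as with the shell, regex filters work but come with a performance cost.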
What to avoid

Let's examine some points that you should note when using MongoDB:

- Not easy to change to another database: When you choose MongoDB, it isn't like other RDBMSes. It can be difficult to migrate, for example, from MongoDB to Couchbase.
- No support for ACID: ACID (Atomicity, Consistency, Isolation, Durability) is the foundation of transactions, but most NoSQL databases don't guarantee ACID, so you need more technical skill to achieve it yourself.
- No support for JOIN: Since a NoSQL database is non-relational, it does not support JOIN.
- Limited document size: MongoDB stores data in JSON-format documents. Because of this, MongoDB has a limited data size, and the latest version supports up to 16 MB per document.
- Filter searches must get lowercase/uppercase right: For example, db.people.find({name: 'Russell'}) and db.people.find({name: 'russell'}) are different. You can filter by regex, such as db.people.find({name: /Russell/i}), but this will affect performance.

I hope this post has provided you with some important points about MongoDB which will help you decide whether your big data solution is a good fit for this NoSQL database.

About the author

Tess Hsu is a UI design and front-end programmer. He can be found on GitHub.

Predictive Analytics with AWS: A quick look at Amazon ML

Natasha Mathur
09 Aug 2018
9 min read
As artificial intelligence and big data have become a ubiquitous part of our everyday lives, cloud-based machine learning services are part of a rising billion-dollar industry. Among the several services currently available in the market, Amazon Machine Learning stands out for its simplicity. In this article, we will look at Amazon Machine Learning, MLaaS, and other related concepts. This article is an excerpt taken from the book 'Effective Amazon Machine Learning' written by Alexis Perrier.

Machine Learning as a Service

Amazon Machine Learning is an online service by Amazon Web Services (AWS) that does supervised learning for predictive analytics. Launched in April 2015 at the AWS Summit, Amazon ML joins a growing list of cloud-based machine learning services, such as Microsoft Azure, Google Prediction, IBM Watson, PredictionIO, BigML, and many others. These online machine learning services form an offering commonly referred to as Machine Learning as a Service, or MLaaS, following a naming pattern similar to other cloud-based services such as SaaS, PaaS, and IaaS, respectively for Software, Platform, or Infrastructure as a Service.

Studies show that MLaaS is a potentially big business trend. ABI Research, a business intelligence consultancy, estimates machine learning-based data analytics tools and services revenues to hit nearly $20 billion in 2021 as MLaaS services take off, as outlined in its business report. Eugenio Pasqua, a Research Analyst at ABI Research, said the following:

"The emergence of the Machine-Learning-as-a-Service (MLaaS) model is good news for the market, as it cuts down the complexity and time required to implement machine learning and thus opens the doors to an increase in its adoption level, especially in the small-to-medium business sector."

The increased accessibility is a direct result of using an API-based infrastructure to build machine-learning models instead of developing applications from scratch. Offering efficient predictive analytics models without the need to code, host, and maintain complex code bases lowers the bar and makes ML available to smaller businesses and institutions.

Amazon ML takes this democratization approach further than the other actors in the field by significantly simplifying the predictive analytics process and its implementation. This simplification revolves around four design decisions that are embedded in the platform:

- A limited set of tasks: binary classification, multiclass classification, and regression
- A single linear algorithm
- A limited choice of metrics to assess the quality of the prediction
- A simple set of tuning parameters for the underlying predictive algorithm

That somewhat constrained environment is simple enough while addressing most predictive analytics problems relevant to business. It can be leveraged across an array of different industries and use cases. Let's see how!

Leveraging full AWS integration

The AWS data ecosystem of pipelines, storage, environments, and Artificial Intelligence (AI) is also a strong argument in favor of choosing Amazon ML as a business platform for predictive analytics needs. Although Amazon ML is simple, the service evolves toward greater complexity and more powerful features once it is integrated into a larger structure of AWS data-related services. AWS is already a major factor in cloud computing.
Here's what an excerpt from The Economist, August 2016, has to say about AWS (http://www.economist.com/news/business/21705849-how-open-source-software-and-cloud-computing-have-set-up-it-industry):

"AWS shows no sign of slowing its progress towards full dominance of cloud computing's wide skies. It has ten times as much computing capacity as the next 14 cloud providers combined, according to Gartner, a consulting firm. AWS's sales in the past quarter were about three times the size of its closest competitor, Microsoft's Azure."

This gives an edge to Amazon ML, as many companies that are using cloud services are likely to already be using AWS. Adding simple and efficient machine learning tools to the product mix anticipates the rise of predictive analytics features as a standard component of web services. Seamless integration with other AWS services is a strong argument in favor of using Amazon ML despite its apparent simplicity.

The following architecture is a case study taken from an AWS January 2016 white paper titled Big Data Analytics Options on AWS (http://d0.awsstatic.com/whitepapers/Big_Data_Analytics_Options_on_AWS.pdf), showing a potential AWS architecture for sentiment analysis on social media. It shows how Amazon ML can be part of a more complex architecture of AWS services.

Comparing performances in Amazon ML services

Keeping systems and applications simple is always difficult, but often worth it for the business. Examples abound of overloaded UIs bringing down the user experience, while products with simple, elegant interfaces and minimal features enjoy widespread popularity. The Keep It Simple mantra is even more difficult to adhere to in a context such as predictive analytics, where performance is key. This is the challenge Amazon took on with its Amazon ML service.

A typical predictive analytics project is a sequence of complex operations: getting the data, cleaning the data, selecting, optimizing, and validating a model, and finally making predictions. In the scripting approach, data scientists develop codebases using machine learning libraries such as the Python scikit-learn library or R packages to handle all these steps, from data gathering to predictions in production. Just as a developer breaks down the necessary steps into modules for maintainability and testability, Amazon ML breaks down a predictive analytics project into different entities: datasource, model, evaluation, and predictions. It's the simplicity of each of these steps that makes AWS so powerful for implementing successful predictive analytics projects.

Engineering data versus model variety

Having a large choice of algorithms for your predictions is always a good thing, but at the end of the day, domain knowledge and the ability to extract meaningful features from clean data is often what wins the game. Kaggle is a well-known platform for predictive analytics competitions, where the best data scientists across the world compete to make predictions on complex datasets. In these predictive competitions, gaining a few decimals on your prediction score is what makes the difference between earning the prize or being just an extra line on the public leaderboard among thousands of other competitors. One thing Kagglers quickly learn is that choosing and tuning the model is only half the battle. Feature extraction, or how to extract relevant predictors from the dataset, is often the key to winning the competition.
In real life, when working on business-related problems, the quality of the data processing phase and the ability to extract meaningful signal out of raw data is the most important and time-consuming part of building an effective predictive model. It is well known that "data preparation accounts for about 80% of the work of data scientists" (http://www.forbes.com/sites/gilpress/2016/03/23/data-preparation-most-time-consuming-least-enjoyable-data-science-task-survey-says/). Model selection and algorithm optimization remain an important part of the work but are often not the deciding factor when it comes to implementation. A solid and robust implementation that is easy to maintain and connects to your ecosystem seamlessly is often preferred to an overly complex model developed and coded in-house, especially when the scripted model only produces small gains compared to a service-based implementation.

Amazon's expertise and the gradient descent algorithm

Amazon has been using machine learning for the retail side of its business and has built serious expertise in predictive analytics. This expertise translates into the choice of algorithm powering the Amazon ML service. The Stochastic Gradient Descent (SGD) algorithm powers Amazon ML linear models and is ultimately responsible for the accuracy of the predictions generated by the service. The SGD algorithm is one of the most robust, resilient, and optimized algorithms. It has been used in many diverse environments, from signal processing to deep learning, and for a wide variety of problems since the 1960s, with great success. SGD has also given rise to many highly efficient variants adapted to a wide variety of data contexts. We will come back to this important algorithm in a later chapter; suffice it to say at this point that the SGD algorithm is the Swiss army knife of predictive analytics algorithms.

Several benchmarks and tests of the Amazon ML service can be found across the web (Amazon, Google, and Azure: https://blog.onliquid.com/machine-learning-services-2/ and Amazon versus scikit-learn: http://lenguyenthedat.com/minimal-data-science-2-avazu/). Overall results show that Amazon ML's performance is on a par with other MLaaS platforms, but also with scripted solutions based on popular machine learning libraries such as scikit-learn. For a given problem in a specific context, with an available dataset and a particular choice of scoring metric, it is probably possible to code a predictive model using an adequate library and obtain better performance than that obtained with Amazon ML. But what Amazon ML offers is stability, an absence of coding, and a very solid benchmark record, as well as seamless integration with the Amazon Web Services ecosystem that already powers a large portion of the Internet.
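The excerpt saves the details of Amazon ML's SGD implementation for a later chapter, and the service's actual training code is proprietary. To make the idea concrete anyway, here is a minimal, illustrative sketch of stochastic gradient descent fitting a linear regression model with NumPy; the learning rate, epoch count, and synthetic data are assumptions chosen for readability, not values used by Amazon ML.

```python
# Illustrative stochastic gradient descent for a linear model (not Amazon ML's code).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3*x1 - 2*x2 + 1 + noise
X = rng.normal(size=(1000, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + 1 + rng.normal(scale=0.1, size=1000)

w = np.zeros(2)   # weights
b = 0.0           # intercept
lr = 0.01         # learning rate (assumed; it would normally be tuned)

for epoch in range(20):
    for i in rng.permutation(len(X)):   # visit one example at a time, in random order
        pred = X[i] @ w + b
        err = pred - y[i]               # gradient of 0.5 * squared error w.r.t. pred
        w -= lr * err * X[i]
        b -= lr * err

print(w, b)  # should approach [3, -2] and 1
```

On top of an optimizer of this kind, Amazon ML exposes only a small set of knobs, such as the number of passes over the data and the regularization settings, which is consistent with the "simple set of tuning parameters" design decision mentioned earlier.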
Amazon ML service pricing strategy

As with other MLaaS providers and AWS services, Amazon ML only charges for what you consume. The cost is broken down into the following:

- An hourly rate for the computing time used to build predictive models
- A prediction fee per thousand prediction samples
- And, in the context of real-time (streaming) predictions, a fee based on the memory allocated upfront for the model

The computational time increases as a function of the following:

- The complexity of the model
- The size of the input data
- The number of attributes
- The number and types of transformations applied

At the time of writing, these charges are as follows:

- $0.42 per hour for data analysis and model building fees
- $0.10 per 1,000 predictions for batch predictions
- $0.0001 per prediction for real-time predictions
- $0.001 per hour for each 10 MB of memory provisioned for your model

These prices do not include fees related to data storage (S3, Redshift, or RDS), which are charged separately. During the creation of your model, Amazon ML gives you a cost estimate based on the data source that has been selected. The Amazon ML service is not part of the AWS free tier, a 12-month offer applicable to certain AWS services for free under certain conditions.
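To see how these line items combine, here is a small, hypothetical back-of-the-envelope calculation using the list prices quoted above; the workload figures (hours of model building, prediction volumes, reserved memory) are invented purely for illustration.

```python
# Hypothetical monthly cost estimate using the Amazon ML list prices quoted above.
BUILD_RATE = 0.42        # $ per hour of data analysis and model building
BATCH_RATE = 0.10        # $ per 1,000 batch predictions
REALTIME_RATE = 0.0001   # $ per real-time prediction
MEMORY_RATE = 0.001      # $ per hour per 10 MB of reserved model memory

# Invented workload for the sake of the example.
build_hours = 5
batch_predictions = 2_000_000
realtime_predictions = 500_000
reserved_mb = 100
hours_in_month = 730

cost = (
    build_hours * BUILD_RATE
    + (batch_predictions / 1_000) * BATCH_RATE
    + realtime_predictions * REALTIME_RATE
    + (reserved_mb / 10) * MEMORY_RATE * hours_in_month
)
print(f"Estimated monthly cost: ${cost:.2f}")  # storage (S3/Redshift/RDS) billed separately
```

With these invented numbers, prediction volume rather than model building dominates the bill, which is worth keeping in mind when weighing batch against real-time predictions.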
To summarize, we presented a simple introduction to the Amazon ML service. Amazon ML is built on solid ground, with a simple yet very efficient algorithm driving its predictions. If you found this post useful, be sure to check out the book 'Effective Amazon Machine Learning' to learn about predictive analytics and other concepts in AWS machine learning.

Integrate applications with AWS services: Amazon DynamoDB & Amazon Kinesis [Tutorial]
AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers
AWS Elastic Load Balancing: support added for Redirects and Fixed Responses in Application Load Balancer


Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher

Vincy Davis
28 May 2019
8 min read
On the latest Recode Decode episode, Kara Swisher, Recode's co-founder, interviewed DuckDuckGo CEO Gabriel Weinberg on data tracking and why it's time for Congress to act, as federal legislation is necessary in the current climate of constant surveillance. DuckDuckGo is an Internet search engine that emphasizes protecting searchers' privacy. Its market share in the U.S. is about 1%, compared to the more than 88% share owned by Google. Given below are some of the key highlights of the interview.

On how DuckDuckGo is different from Google

DuckDuckGo, which is an internet privacy company, helps users "escape the creepiness and tracking on the internet". DuckDuckGo has been an alternative to Google for 11 years. It handles about a billion searches a month and is the fourth-largest search engine in the U.S. Weinberg states that "Google and Facebook are the largest traders of trackers", and claims that his company blocks trackers from hundreds of companies. DuckDuckGo also enables more encryption by steering users to the encrypted version of a website, which prevents Internet Service Providers (ISPs) from tracking the user.

When asked why he settled on the search business, Weinberg replied that, coming from a tech background (tech policy at MIT), he has always been interested in search. After developing this business, he got many privacy queries. It's then that he realized that, "One, searches are essentially the most private thing on the internet. You just type in all your deepest, darkest secrets and search, right? The second thing is, you don't need to actually track people to make money on search," so he realized that this would be a "better user experience, and just made the decision not to track people."

Read More: DuckDuckGo chooses to improve its products without sacrificing user privacy

The switch from contextual advertising to behavioral advertising

From the time the internet started working until the mid-2000s, the kind of advertising used was contextual advertising. It had a very simple routine: "sites used to sell their own ads, they would put advertising based on the content of the article". Post mid-2000s, the model shifted to behavioral advertising. It includes the "creepy ads, the ones that kind of follow you around the internet."

Weinberg added that when website publishers in the Google Network of content sites used to sell their biggest inventory, banner advertising was done at the top of the page. To bring in more money, the bottom of the pages was sold to ad networks, to target the site content and audience. These advertisements are administered, sorted, and maintained by Google, under the name AdSense. This helped Google get all the behavioral data, so if a user searched for something, Google could follow them around with that search. As these advertisements became more lucrative, publishers ceded most of their page over to this behavioral advertising. There has been "no real regulation in tech" to prevent this. Through these trackers, companies like Google and Facebook and many others get user information, including purchase history, location history, browsing history, and search history.
Read More: Ireland's Data Protection Commission initiates an inquiry into Google's online Ad Exchange services

Read More: Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices

Weinberg notes that "when you go to, now, a website that has advertising from one of these networks, there's a real-time bidding against you, as a person. There's an auction to sell you an ad based on all this creepy information you didn't even realize people captured."

People do 'care about privacy'

Weinberg says that "before you knew about it, you were okay with it because you didn't realize it was so invasive, but after Cambridge Analytica and all the stories about the tracking, that number just keeps going up and up and up." He also explained the "do not track" setting, which is available in most browsers' privacy settings. He says, "People are like, 'No one ever goes into settings and looks at privacy.' That's not true. Literally, tens of millions of Americans have gone into their browser settings and checked this thing. So, people do care!"

Weinberg believes 'do not track' is a better mechanism for privacy laws, because once the user makes the setting, no more popups are needed, i.e., no more sites can track you. He also hopes that a 'do not track' mechanism is passed by Congress, as it would allow everyone in the country to opt out of being tracked.

On challenging Google

One main issue faced by DuckDuckGo is that not many people are aware of it. Weinberg says, "There's 20 percent of people that we think would be interested in switching to DuckDuckGo, but it's hard to convey all these privacy concepts." He also claimed that companies like Google are altering people's searches through a 'filter bubble'. As an example, he added, "when you search, you expect to get the results, right? But we found that it varies a lot by location." Last year, DuckDuckGo accused Google of search personalization that contributes to "filter bubbles". In 2012, DuckDuckGo ran a study suggesting Google's filter bubble may have significantly influenced the 2012 U.S. Presidential election by inserting tens of millions more links for Obama than for Romney in the run-up to that election.

Read More: DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'

How to prevent online tracking

Other than using DuckDuckGo and not using, say, any of Google's internet home devices, Swisher asked Weinberg what other ways there are to protect ourselves from being tracked online. Weinberg says there are plenty of other options available. He suggested, "For Google, there are actually alternatives in every category." For email, he suggested ProtonMail and FastMail as options. When asked about Facebook, he admitted that "there aren't great alternatives to it" and added cheekily, "Just leave it." He further added that there are a bunch of privacy settings available in the devices themselves. He also mentioned the DuckDuckGo blog, spreadprivacy.com, which provides advice and tips. There are also things users can do themselves, like turning off ad tracking on the device or using end-to-end encryption.

On facial recognition systems

Weinberg says, "Facial recognition is hard." A person can wear even a small accessory to avoid being recognized on camera. He admits, "you're going to need laws" to regulate its use, and he thinks San Francisco started a great trend in banning the technology.
Many other points were also discussed by Swisher and Weinberg, including Section 230 of the Communications Decency Act and controlling sensitive data on the internet. Weinberg also asserted that there's a need for a national bill like GDPR in the U.S. Questions were also raised about Amazon's growing advertising through Google and Facebook. Weinberg dismissed the probability of having a DuckDuckGo for YouTube anytime soon.

Many users agree with Gabriel Weinberg that data tracking should be opt-in and that it is time to make 'Do not track' the norm. A user on Hacker News commented, "Discounting Internet by axing privacy is a nasty idea. Privacy should be available by default without any added price tags." Another user added, "In addition to not stalking you across the web, DDG also does not store data on you even when using their products directly. For me that is still cause for my use of DDG."

However, as mentioned by Weinberg, there are still people who do not mind being tracked online. It can be because they are not aware of the big trades that take place behind a user's one click. A user on Reddit has given an apt basis for this: "Privacy matters to people at home, but not online, for some reason. I think because it hasn't been transparent, and isn't as obvious as a person looking in your windows. That slowly seems to be changing as more of these concerns are making the news, more breaches, more scandals. You can argue the internet is 'wandering outside', which is true to some degree, but it doesn't feel that way. It feels private, just you and your computer/phone, but it's not. What we experience is not matching up with reality. That is what's dangerous/insidious about the whole thing. People should be able to choose when to make themselves 'public', and you largely can't because it's complicated and obfuscated."

For more details about their conversation, check out the full interview.

Speech2Face: A neural network that "imagines" faces from hearing voices. Is it too soon to worry about ethnic profiling?
'Facial Recognition technology is faulty, racist, biased, abusive to civil rights; act now to restrict misuse' say experts to House Oversight and Reform Committee
GDPR complaint in EU claim billions of personal data leaked via online advertising bids


Teaching AI ethics - Trick or Treat?

Natasha Mathur
31 Oct 2018
5 min read
The Public Voice Coalition announced Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018 last week. "The rise of AI decision-making also implicates fundamental rights of fairness, accountability, and transparency. Modern data analysis produces significant outcomes that have real-life consequences for people in employment, housing, credit, commerce, and criminal sentencing. Many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them. We propose these Universal Guidelines to inform and improve the design and use of AI", reads EPIC's guideline page.

Artificial intelligence ethics aims to improve the design and use of AI, to minimize risks for society, and to ensure the protection of human rights. AI ethics focuses on values such as transparency, fairness, reliability, validity, accountability, accuracy, and public safety.

Why teach AI ethics?

Without AI ethics, the wonders of AI can turn into the dangers of AI, posing strong threats to society and even human lives. One such example came earlier this year, when an autonomous Uber car, a 2017 Volvo SUV traveling at roughly 40 miles an hour, killed a woman in the street in Arizona. This incident brings out the challenges and nuances of building an AI system with the right set of values embedded in it. As different factors are considered for an algorithm to reach the required set of outcomes, it is more than possible that these criteria are not always shared transparently with users and authorities. Other non-life-threatening but still dangerous examples include the time when Google Allo responded with a turban emoji on being asked to suggest three emoji responses to a gun emoji, and when Microsoft's Twitter bot Tay tweeted racist and sexist comments. AI scientists should be taught at the early stages that these values are meant to be at the forefront when deciding on factors such as the design, logic, techniques, and outcome of an AI project.

Universities and organizations promoting learning about AI ethics

What's encouraging is that organizations and universities are taking steps (slowly but surely) to promote the importance of teaching ethics to students and employees working with AI or machine learning systems. For instance, the World Economic Forum Global Future Councils on Artificial Intelligence and Robotics has come out with a "Teaching AI ethics" project that includes creating a repository of actionable and useful materials for faculties wishing to add social inquiry and discourse to their AI coursework. This is a great opportunity, as the project connects professors from around the world and offers them a platform to share, learn, and customize their curriculum to include a focus on AI ethics.

Cornell, Harvard, MIT, Stanford, and the University of Texas are some of the universities that recently introduced courses on ethics in designing autonomous and intelligent systems. These courses put an emphasis on AI's ethical, legal, and policy implications, along with teaching students about dealing with challenges such as biased data sets in AI.

Mozilla has taken the initiative to make people more aware of the social implications of AI in our society through its Creative Media Awards. "We're seeking projects that explore artificial intelligence and machine learning.
In a world where biased algorithms, skewed data sets, and broken recommendation engines can radicalize YouTube users, promote racism, and spread fake news, it's more important than ever to support artwork and advocacy work that educates and engages internet users", reads the Mozilla awards page. Moreover, Mozilla also announced a $3.5 million award for the 'Responsible Computer Science Challenge' to encourage teaching ethical coding to CS graduates.

Other examples include Google's AI ethics principles, announced back in June, to abide by when developing AI projects, and SAP's AI ethics guidelines and advisory panel, created last month. SAP says that it designed these guidelines because it "considers the ethical use of data a core value. We want to create software that enables intelligent enterprise and actually improves people's lives. Such principles will serve as the basis to make AI a technology that augments human talent."

Other organizations, like DrivenData, have come out with tools such as Deon, a handy tool that helps data scientists add an ethics checklist to their data science projects, making sure that all projects are designed with ethics at the center.

Some, however, feel that having to explain how an AI system reached a particular outcome (in the name of transparency) can put a damper on its capabilities. For instance, according to David Weinberger, a senior researcher at the Harvard Berkman Klein Center for Internet & Society, "demanding explicability sounds fine, but achieving it may require making artificial intelligence artificially stupid".

Teaching AI ethics - trick or treat?

AI has transformed the world as we know it. It has taken over different spheres of our lives and made things much simpler for us. However, to make sure that AI continues to deliver its transformative and evolutionary benefits effectively, we need ethics. From governments to tech organizations to young data scientists, everyone must use this tech responsibly. Having AI ethics in place is an integral part of the AI development process and will shape a healthy future for robotics and artificial intelligence. That is why teaching AI ethics is a sure-shot treat. It is a TREAT that will boost the productivity of humans in AI and help build a better tomorrow.

Does AI deserve to be so Overhyped?

Aaron Lazar
28 May 2018
6 min read
The short answer is yes, and no. The long answer is, well, read on to find out.

Several have been asking the question, including myself, wondering whether Artificial Intelligence is just another passing fad like maybe Google Glass or nanotechnology. The hype for AI began over the past few years, although if you actually look back at the '60s it seems to have started way back then. In the early '90s and all the way down to the early 2000s, a lot of media and television shows were talking about AI quite a bit. Going 25 centuries even further back, Aristotle speaks not just of thinking machines but goes on to talk of autonomous ones in his book, Politics:

"for if every instrument, at command, or from a preconception of its master's will, could accomplish its work (as the story goes of the statues of Daedalus; or what the poet tells us of the tripods of Vulcan, "that they moved of their own accord into the assembly of the gods"), the shuttle would then weave, and the lyre play of itself; nor would the architect want servants, or the [1254a] master slaves."

Aristotle, Politics: A Treatise on Government, Book 1, Chapter 4

This imagery of AI has managed to sink into our subconscious minds over the centuries, propelling creative work, academic research, and industrial revolutions toward that goal. The thought of giving machines a mind of their own existed quite long ago, but recent advancements in technology have made it much clearer and more realistic.

The Rise of the Machines

The year is 2018. The 4th Industrial Revolution is happening and intelligent automation has taken over. This is the point where I say no, AI is not overhyped. General Electric, for example, is a billion-dollar manufacturing company that has already invested in AI. GE Digital has AI running through several automated systems, and it even has its own IIoT platform called Predix. Similarly, in the field of healthcare, the implementation of AI is growing in leaps and bounds. The Google DeepMind project is able to process millions of medical records within minutes. Although this kind of research is in its early phase, Google is working closely with the Moorfields Eye Hospital NHS Foundation Trust to implement AI and improve eye treatment. AI startups focused on healthcare and allied areas such as genetic engineering are among the most heavily venture-capital-backed ones in recent times.

Computer Vision, or image recognition, is one field where AI has really proven its power. Analysing datasets like iris has never been easier, paving the way for more advanced use cases like automated quality checks in manufacturing units. Another interesting field is healthcare, where AI has helped sift through tonnes of data, helping doctors diagnose illnesses quicker, manufacture more effective and responsive drugs, and monitor patients. The list is endless, clearly showing that AI has made its mark in several industries.

Back (up) to the Future

Now, if you talk about the commercial implementations of AI, they're still quite far-fetched at the moment. Take the same Computer Vision application, for example. Its implementation will be a huge breakthrough in autonomous vehicles. But if researchers have only managed to obtain 80% accuracy for object recognition on roads, the battle is not close to being won! Even if they do improve, do you think driverless vehicles are ready to drive in the snow, through the rain, or even storms?
I remember, a few years ago, Business Process Outsourcing was one industry, at least in India, that was quite fearful of the entry of AI and autonomous systems that might take over its jobs. Machines are only capable of performing 60-70% of the BPO processes in insurance, and with changing customer requirements and simultaneously falling patience levels, those numbers are terrible! It looks like the end of Moore's law is here, for AI I mean. Well, you can't really expect AI to have the same exponential growth that computers did decades ago. There are a lot of unmet expectations in several fields, which has a considerable number of people thinking that AI isn't going to solve their problems now, and they're right. It is probably going to take a few more years to mature, making it a thing of the future, not of the present. Is AI overhyped now? Yeah, maybe?

What I think

Someone once said that hype is a double-edged sword. If there's not enough, innovation may become obscure; if there's too much, expectations become unreasonable. It's true that AI has several beneficial use cases, but what about the fairness of such systems? Will machines continue to think the way they're supposed to, or will they start finding their own missions that don't involve benefits to the human race? At the same time, there's also the question of security and data privacy. GDPR will come into effect in a few days, but what about the prevailing issues of internet security?

I had an interesting discussion with a colleague yesterday. We were talking about what the impact of AI could be for us as end customers, in a developing and young country like India. Do we really need to fear losing our jobs, and will we be able to reap the benefits of AI directly, or would the impact be indirect? The answer is, probably yes, but not so soon. If we drew up a hierarchy of needs pyramid for AI, each field would have to climb several stages before fully leveraging AI: collecting data, storing it effectively, exploring it, then aggregating it, optimising it with the help of algorithms, and then finally achieving AI. That's bound to take a LOT of time! Honestly speaking, a country like India still lacks AI implementation in several fields. The major customers of AI, apart from some industrial giants, will obviously be the government, although that is sure to take at least a decade or so, keeping in mind the several aspects to be accomplished first. In the meantime, budding AI developers and engineers are scurrying to skill themselves up in the race to be in the cream of the crowd!

Similarly, what about the rest of the world? Well, I can't speak for everyone, but if you ask me, AI is a really promising technology and I think we need to give it some time; allow the industries and organisations investing in it to take enough time to let it evolve and ultimately benefit us customers, one way or another.

You can now make music with AI thanks to Magenta.js
Splunk leverages AI in its monitoring tools


Serverless computing wars: AWS Lambdas vs Azure Functions

Vijin Boricha
03 May 2018
5 min read
In recent times, local servers and on-premises computers are counted as old school. Users and organisations have shifted their focus to the cloud to store, manage, and process data. Cloud computing has evolved in ways that let DevOps teams focus on improving code and processes rather than on provisioning, scaling, and maintaining servers. This means we have now entered the serverless era, and the big players of this era are AWS Lambda and Azure Functions. So if you are a developer, you no longer need to worry about low-level infrastructure decisions. Coming to the bigger question:

What is Serverless Computing / Function-as-a-Service?

Function-as-a-Service (FaaS) can be described as a concept of serverless computing where applications depend on third-party services to manage server-side logic. This means application developers can concentrate on building their applications rather than thinking about servers. So if you want to build any type of application or backend service, just go about it, as everything required to run and scale your application is already being handled for you. The following are popular platforms that support FaaS:

- AWS Lambda
- Azure Functions
- Cloud Functions
- Iron.io
- Webtask.io

Benefits of Serverless Computing

Serverless applications and architectures are gaining momentum and are increasingly being used by companies of all sizes. Serverless technology rapidly reduces production time and minimizes your costs, while you still have the freedom to customize your code without hindering functionality. For good reason: serverless-based software takes care of many of the problems developers face when running systems and servers, such as fault tolerance, centralized logging, horizontal scalability, and deployments, to name a few. Additionally, the serverless pay-per-invocation model can result in drastic cost savings. Since AWS Lambda and Azure Functions are the most popular and widely used serverless computing platforms, we will discuss these services further.

AWS Lambda

AWS is recognized as one of the largest market leaders in cloud computing. One of the recent services within the AWS umbrella that has gained a lot of traction is AWS Lambda. It is the part of Amazon Web Services that lets you run your code without provisioning or managing servers. AWS Lambda is a compute service that enables you to deploy applications and back-end services that operate with zero upfront cost and require no system administration. Although seemingly simple and easy to use, Lambda is a highly effective and scalable compute service that provides developers with a powerful platform to design and develop serverless event-driven systems and applications.

Pros:
- Supports automatic scaling
- Supports an unlimited number of functions
- First 1 million requests are free, then $0.20 per 1 million invocations, plus $0.00001667 per GB-second

Cons:
- Limited concurrent executions (1,000 executions per account)
- Supports fewer languages in comparison to Azure (JavaScript, Java, C#, and Python)
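To ground the comparison, here is a minimal sketch of what a Lambda function looks like in Python, one of the languages listed above. The handler name, the event shape, and the greeting payload are illustrative assumptions; in a real deployment you would configure the function and its trigger (for example, API Gateway) in AWS.

```python
# Minimal, illustrative AWS Lambda handler in Python.
# "lambda_handler" is the conventional entry point name configured for the function.
import json


def lambda_handler(event, context):
    # "event" carries the trigger payload (e.g. an API Gateway request);
    # "context" exposes runtime metadata such as the remaining execution time.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Azure Functions follows a very similar model: you write a small function body and wire it to a trigger and bindings, as described next.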
Azure Functions

Microsoft provides a solution you can use to easily run small segments of code in the cloud: Azure Functions. It provides solutions for processing data, integrating systems, and building simple APIs and microservices. Azure Functions helps you easily run small pieces of code in the cloud without worrying about a whole application or the infrastructure to run it. With Azure Functions, you can use triggers to execute your code and bindings to simplify the input and output of your code.

Pros:
- Supports unlimited concurrent executions
- Supports C#, JavaScript, F#, Python, Batch, PHP, PowerShell
- Supports an unlimited number of functions
- First 1 million requests are free, then $0.20 per 1 million invocations, plus $0.000016 per GB-second

Cons:
- Manual scaling (App Service Plan)

Conclusion

When compared with the traditional client-server approach, serverless architecture saves a lot of effort and proves cost-effective for many organisations, no matter their size. The most important aspect of choosing the right platform is understanding which platform benefits your organisation the most. AWS Lambda has been around for a while, with extensive support for Linux-based platforms, but Azure Functions is not far behind in supporting the Windows-based suite even after entering the serverless market more recently. If you adopt AWS, you will be able to make the most of its open source integration, pay-as-you-go model, and high-performance computing environment. Azure, on the other hand, is easier to use as it's a Windows platform. It also supports a precise pricing model where it charges by the minute, and it has extended support for macOS and Linux. So if you are looking for a clear winner here, you shouldn't be surprised that AWS and Azure are similar in many ways, and it would be a tie if you had to choose which is better or worse than the other. This battle will always be heated, and experts will keep placing their bets on who wins the race. In the end, the entire discussion drills down to what your business needs. After all, the mission is always to grow your business at a marginal cost.

The Lambda programming model
How to Run Code in the Cloud with AWS Lambda
Download Microsoft Azure serverless computing e-book for free