Tech Guides

Why MXNet is a versatile Deep Learning framework

Aaron Lazar
05 Sep 2017
5 min read
Tools to perform deep learning tasks are in abundance. You have programming languages that are adapted for the job, or those specifically created to get the job done. Then you have several frameworks and libraries that allow data scientists to design systems which sift through tonnes of data and learn from it. But a major challenge for all these tools lies in tackling two primary issues: the size of the data and the speed of computation.

With petabytes and exabytes of data, the workload has become far more taxing for researchers to handle. Take image processing, for example. ImageNet is a massive dataset consisting of millions of images across thousands of distinct classes, and tackling this scale is a serious lip-biting affair. The speed at which researchers can get actionable insights from the data is just as important. Powerful hardware like multi-core GPUs, rumbling with raw power and begging to be tamed, has waltzed into the mosh pit of big data. You may try to humble these mean machines with old-school machine learning stacks like R, SciPy, or NumPy, but in vain. So the deep learning community developed several powerful libraries to solve this problem, and they succeeded to an extent. But one major problem remained: no framework solved the problems of efficiency and flexibility together. This is where a one-of-a-kind, powerful, and flexible library like MXNet rises to the challenge and makes developers' lives a lot easier.

What is MXNet?

MXNet sits happy at over 10k stars on GitHub and has recently been inducted into the Apache Software Foundation. It focuses on accelerating the development and deployment of deep neural networks at scale. This means exploiting the full potential of multi-core GPUs to process tonnes of data at blazing fast speeds. We'll take a look at some of MXNet's most interesting features over the next few minutes.

Why is MXNet so good?

Efficient

MXNet is backed by a C++ backend, which allows it to be extremely fast even on a single machine. It automatically parallelizes computation across devices and synchronises computation when multithreading is introduced. Support for linear scaling means that not only can the number of GPUs be scaled, but MXNet also supports heavily distributed computing by scaling the number of machines as well. On top of that, MXNet has a graph optimisation layer that sits on a dynamic dependency scheduler, which enhances memory efficiency and speed.

Extremely portable

MXNet is extremely portable, especially as it can be programmed in umpteen languages such as C++, R, Python, Julia, JavaScript, Go, and more. It is widely supported across most operating systems like Linux, Windows, iOS, and Android, making it multi-platform, including on low-level platforms. It also works well in cloud environments like AWS - one of the reasons AWS has officially adopted MXNet as its deep learning framework of choice. You can now run MXNet from the Deep Learning AMI.

Great for data visualization

MXNet provides the mx.viz.plot_network method, built on Graphviz, to visualise neural nets as computation graphs. Check Joseph Paul Cohen's blog for a great side-by-side visualisation of CNN architectures in MXNet. Alternatively, you could take TensorBoard from TensorFlow and use it with MXNet. Jump here for more info on that.
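As a quick illustration of that visualisation workflow, here is a minimal sketch using MXNet's pre-Gluon symbolic API (current around the time of writing); it assumes mxnet and the graphviz package are installed, and the layer names and sizes are invented for the example:

```python
import mxnet as mx

# Define a tiny two-layer classifier symbolically.
data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, num_hidden=64, name='fc1')
act1 = mx.sym.Activation(data=fc1, act_type='relu', name='relu1')
fc2 = mx.sym.FullyConnected(data=act1, num_hidden=10, name='fc2')
net = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# Render the computation graph; shape inference needs an example input shape.
graph = mx.viz.plot_network(net, shape={'data': (1, 784)})
graph.render('net_graph')   # writes net_graph.pdf via Graphviz
```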
Flexible

MXNet supports both imperative and declarative/symbolic styles of programming, allowing you to blend both styles for increased efficiency (a short sketch illustrating this appears at the end of this article). Libraries like NumPy and Torch support plain imperative programming, while TensorFlow, Theano, and Caffe support plain declarative programming. You can get a closer look at what these styles actually are here. MXNet is the only framework so far that mixes both styles to maximise efficiency and productivity.

In-built profiler

MXNet comes packaged with an in-built profiler that lets you profile execution times in the network, layer by layer. While you may still want general profiling tools like gprof and nvprof to profile at the kernel, function, or instruction level, the in-built profiler is specifically tuned to provide detailed information at the symbol or operator level.

Limitations

While MXNet has a host of attractive features that explain why it has earned public admiration, it has its share of limitations, just like any other popular tool. One of the biggest issues encountered with MXNet is that it tends to give varied results when compile settings are modified. For example, a model might work well with cuDNN3 but not with cuDNN4. To overcome issues like this, you might have to spend some time on forums. Moreover, writing your own operators or layers in C++ to achieve efficiency is a daunting task, although the official documentation mentions that this has become easier as of v0.9. Finally, the documentation is introductory and is not organised well enough to support creating custom operators or performing other advanced tasks.

So, should I use MXNet?

MXNet is the new kid on the block that supports modern deep learning models like CNNs and LSTMs. It boasts immense speed, scalability, and flexibility to solve your deep learning problems, and consumes as little as 4 GB of memory when running deep networks with almost a thousand layers. The core library, including its dependencies, can be packed into a single C++ source file, which can be compiled on both Android and iOS, as well as in a browser with the JavaScript extensions. But like all other libraries, it has its own hurdles, none of them serious enough to prevent you from getting the job done, and done well! Is that enough to get you excited about MXNet? Go get working then! And don't forget to tell us your experiences of working with MXNet.
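To make the mixed imperative/symbolic idea from the Flexible section concrete, here is a minimal sketch, again using the pre-Gluon API; the shapes and constants are invented for the example:

```python
import mxnet as mx

# Imperative style: NDArray operations execute immediately, like NumPy.
a = mx.nd.ones((2, 3))
b = a * 2 + 1

# Symbolic style: declare a graph first, bind and execute it later.
x = mx.sym.Variable('x')
y = mx.sym.FullyConnected(data=x, num_hidden=4, name='fc1')
ex = y.simple_bind(ctx=mx.cpu(), x=(2, 3))

# Mix the two: feed the imperatively computed NDArray into the graph.
ex.arg_dict['x'][:] = b
ex.arg_dict['fc1_weight'][:] = 0.1   # simple_bind allocates but doesn't initialize
ex.arg_dict['fc1_bias'][:] = 0.0
out = ex.forward()[0]
print(out.asnumpy())
```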

Why microservices and DevOps are a match made in heaven

Erik Kappelman
12 Oct 2017
4 min read
What are microservices?

In terms of software, 'services' can be thought of as little chunks of functionality. Services are part of service-oriented architecture (SOA). Services are stateless, adhere to a contract (shared standards), are autonomous, relatively granular, and should be a 'black box' to the user. Microservices are a logical extension of services: microservices are services that perform only one function. This matches the Unix philosophy, "Do one thing, and do it well." But who cares? And what about DevOps? Well, although the title is a cliche, DevOps and microservice-based architecture are absolutely a match made in heaven. Following the philosophy of fully explaining terms, let's talk a bit about DevOps.

What is DevOps?

DevOps, a contraction of "development" and "operations," is a process used to create software. DevOps is not one specific philosophy or process; there are many variants, but some features are shared across most of them. DevOps advocates a continuous development process in which as many elements as possible are automated. Each iteration of a product is coded, built, tested, packaged, released, and then monitored. This is referred to as the DevOps toolchain. When there is a need or desire to upgrade or change functionality, or the way a product is designed, the process begins again. The idea is that DevOps should run in a circular fashion, always upgrading and always getting better. There are myriad tools in use right now within various flavors of DevOps. These tools are designed to meld with the DevOps toolchain and have revolutionized the development process for many developers and companies.

Why microservices and DevOps go together

You should already see why microservices and DevOps go together so well. DevOps calls for continuous monitoring, testing, and deployment of software. Microservices are inherently modular, because they are intended to perform a single function. Software that is modular fits easily into the DevOps structure. Incremental changes can be made to parts of a project, perhaps a single microservice. If the service contracts and control mechanisms are properly created, a single microservice should be easy to upgrade, build, test, deploy, and monitor without sending a cascading wave of bugs through adjacent services (a toy example of such a single-purpose service follows below). DevOps really doesn't make much sense outside of a structure like this. If your software is designed as a behemoth: an interconnected, interdependent ball of wax, changing part of the functionality will 'break' everything. This means that as changes or upgrades are made, almost every change, no matter how big or small, triggers what amounts to an almost full rewrite or upgrade of the software in question. When applied to that kind of project, most DevOps processes would actually hinder development instead of helping. When projects are modularized at a relatively granular level, such as when a project employs a microservice-based structure, DevOps expedites delivery time and quality simultaneously. It should be noted that neither a microservice architecture nor DevOps processes are tied to any specific tools or languages. These are philosophies for development and can be applied in many different ways. That being said, there are many continuous integration and deployment tools, as well as many automation tools, designed for use within a DevOps framework.
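As a toy illustration of a service that performs only one function, here is a minimal sketch of a single-purpose microservice; it assumes Flask is installed, and the endpoint, port, and exchange rate are invented for the example:

```python
from flask import Flask, jsonify

app = Flask(__name__)

USD_TO_EUR = 0.92   # hypothetical fixed rate, just for the example

# The whole service does exactly one thing: convert US dollars to euros.
# Because it is this small, it can be built, tested, deployed, and
# monitored on its own, which is what makes it fit a DevOps toolchain.
@app.route("/convert/<float:amount>")
def convert(amount):
    return jsonify(usd=amount, eur=round(amount * USD_TO_EUR, 2))

if __name__ == "__main__":
    app.run(port=5000)
```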
Criticisms of microservices

There are some criticisms of the microservice structure. One criticism is that using microservices does not get rid of the complexities of a traditional program; those complexities are just moved onto the network that the services use to communicate. Stress on the network is another criticism of the microservice architecture. This is because the architecture distributes the elements of a program or process around a network in various places, and for these services to perform cohesive functions, they must use the network to communicate. Depending on how many services make up a program or process, this can translate into significant network activity, which then creates problems of its own. Another criticism is that microservices can sometimes become so-called 'nanoservices': services that perform a function so small that the cost of the service outweighs its utility. These criticisms should be kept in mind, but, in my opinion, they don't amount to enough to undermine the usual functionality of microservices in a DevOps environment. Using microservices in the DevOps process helps fully realize the potential of the continuous integration, testing, and deployment promised by DevOps. Combined, these approaches can optimize development in a way that should be taken advantage of whenever possible.

Is serverless architecture a good choice for app development?

Mehul Rajput
11 Oct 2019
6 min read
App development has evolved rapidly in recent years. With new demands and expectations from businesses and users, trends like cloud have helped developers to be more productive and to build faster, more reliable, and secure applications. But there's no end to evolution, and serverless is arguably the next step for application development. But is a serverless architecture the right choice?

What is a Serverless Architecture?

When you hear the word serverless, you might assume that it means no servers. In actual fact, it refers to eliminating the need to manage the servers; that responsibility shifts to your cloud provider. Simply put, the constituent parts of an application are divided between multiple servers, with no need for the application owner/manager to create or manage the infrastructure that supports it. Instead of running off a server, a serverless application runs off functions. These are essentially actions that are fired off to make things happen within the application. This is where the phrase 'function-as-a-service', or FaaS (another way of describing serverless), comes from. A recent report projects that the FaaS market will grow at a compound annual rate of 32.7% to reach around 7.72 billion US dollars by 2021.

Is Serverless Architecture a Good Choice for App Development?

Now that we've established what serverless actually means, let's get down to business. Is serverless architecture the right choice for app development? Well, it can work either way: it has positives as well as negatives. Here are some of them.

Using serverless for app development: the positives

There are several reasons why serverless architecture can be good for app development. Some of them are discussed below:

- Decreasing costs
- Easier for service
- Scalability
- Third-party services

Decreasing costs

The most appealing aspect of a serverless architecture in an app development process is that it reduces the costs of the work. It's typically less expensive than a 'traditional' server architecture, because with hardware servers you have to pay for many things that aren't strictly required: regular maintenance, the premises, the electricity, and maintenance staff. Hence, you can save a considerable amount of money and put it towards app quality instead.

Easier for service

When the owner or app manager does not have to manage the server themselves, and a machine does the job, it becomes much less challenging to keep the service accessible. First, the job is more comfortable because it does not require supervision. Second, you do not have to spend time on it and can use that time for productive work such as product development. Third, the service this technology provides is reliable, so you can use it without much fear.

Scalability

Another interestingly useful advantage of serverless architecture in app development is scalability. So, what is scalability? It is the capability of a system to handle an extra amount of work by adding resources, so that an app or product continues to work properly, without disturbance, when it grows in size or volume to meet user needs. A serverless architecture acts as the resource that is added to the system to handle whatever work has piled up.

Third-party services

Another essential and useful feature of serverless architecture is that you can use third-party services. Your app can use any third-party service it requires beyond what you already have, which reduces the effort needed to create the app's backend architecture. Additionally, a third party may provide better services than you could build yourself. In this way, serverless proves its worth by giving you the reach of third parties.
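To make the 'functions, not servers' idea above concrete, here is a minimal sketch of a function written for AWS Lambda, one popular FaaS platform; the event field is invented for the example:

```python
# A FaaS function: there is no server to manage, only a handler that the
# platform invokes on demand and bills per execution.
def lambda_handler(event, context):
    # 'event' carries the request data; 'context' carries runtime metadata.
    name = event.get("name", "world")   # hypothetical event field
    return {
        "statusCode": 200,
        "body": f"Hello, {name}!",
    }
```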
Serverless for app development: the negatives

Now that we know the advantages of a serverless architecture, it's important to note that it can also bring some limitations and disadvantages. These are:

- Time restrictions
- Vendor lock-in
- Multi-tenancy
- Debugging is not possible

Time restrictions

As mentioned before, serverless architecture works on FaaS rules and has a time limit for running a function. This time limit is 300 seconds exactly, and when the limit is reached, the function is stopped. Therefore, for more complex functions that require more time to execute, the FaaS approach may not be a good choice. The problem can often be worked around by splitting a task into several simpler functions, if the task allows it. Otherwise, time restrictions like these can cause great difficulty.

Vendor lock-in

We have discussed that by using serverless architecture, we can make use of third-party services. This can also go the wrong way and cause vendor lock-in. If, for any reason, you decide to shift to a new service provider, in most cases services will be fulfilled in a different way. That means the productivity gains you expected from serverless will be lost, as you will have to adjust and reconfigure the infrastructure to accept the new service.

Multi-tenancy

Multi-tenancy is an increasing problem in serverless architecture. The data of many tenants is kept quite near to each other, which can create confusion: some data might be exchanged, distributed, or possibly lost. In turn, this can cause security and reliability issues. A customer could, for example, suddenly produce an extraordinarily high load that affects other customers' applications.

Debugging is not possible

Debugging isn't really possible with serverless. Because the uploaded code runs on infrastructure you do not control, there is no debugging facility attached to it. If you want to know what a function does, you run it and wait for the result; if it crashes inside the function, there is little you can do about it. There is, however, a way to mitigate this problem: extensive logging. With every step logged, the chances of errors that cause debugging headaches decrease.

Conclusion

Serverless architecture certainly seems impressive in spite of its limitations. There is no doubt that the viability and success of an architecture depend on the business requirements and, of course, on the technology used. In the same way, serverless can shine if used in the appropriate case. I hope this blog has helped you understand serverless architecture for mobile apps and lets you see both its bright and dark sides.

Author Bio

Mehul Rajput is the CEO and co-founder of Mindinventory, which specializes in Android and iOS app development and provides web and mobile app solutions for businesses from startup to enterprise level.
He is an avid blogger and writes on mobile technologies, mobile apps, app marketing, app development, startups, and business.

Read next:
- What is serverless architecture and why should I be interested?
- Introducing numpywren, a system for linear algebra built on a serverless architecture
- Serverless Computing 101
- Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 1
- Modern Cloud Native architectures: Microservices, Containers, and Serverless – Part 2

Why should your e-commerce site opt for Headless Magento 2?

Guest Contributor
09 Apr 2019
5 min read
Over the past few years, headless e-commerce has been much discussed, often touted as the 'future of e-commerce'. Last year, in a joint webinar on ten B2B eCommerce trends for 2018, BORN and Magento predicted that headless CMS commerce will become a popular type of website architecture in the coming year and beyond, for both B2B and B2C businesses. Magento Headless is one such headless CMS that is gaining rapid popularity. Those who have recently jumped on the Magento bandwagon might find it new and hard to grasp. So, in this post we will give a brief overview of what headless Magento is, its benefits, and why you should opt for headless Magento 2.

What is a Headless Browser?

Headless browsers are software-enabled browsers offering a separate user interface. You can automate various actions of your website and monitor its performance under different circumstances. If you're working with command-line instructions in a headless browser, there is no GUI. With the help of a headless browser, one can view several things such as the dimensions of a web layout, the font family, and other design elements used in a particular website. Headless browsers are mainly used to test web pages. Earlier, e-commerce platforms like Magento and Shopify used to have their back end and front end tightly integrated. After the introduction of the headless architecture, the front end was separated from the back end; as a result, both parts work independently. Various e-commerce platforms support a headless approach, such as Magento, BigCommerce, Shopify, and many more. With Shopify, if you already have a website, you can take the headless approach and use Shopify for your sales, with links to it from your main website. Like Shopify, BigCommerce also offers a good range of themes and templates to make sure stores look professional and get up and running fast. The platform incorporates a full-featured CMS that allows you to run an entire website, not just your online store. Going headless means running the output in a non-graphical environment, such as a Linux terminal without X-Windows or Wayland. For Google's search algorithms, headless rendering plays quite a crucial role. The search engine strongly recommends this architecture because it helps Google crawl Ajax websites: when headless rendering is integrated on the web server, the Ajax program can be executed on the server before the result is handed over for search engine rendering, making such sites easy for the search engine to access.

What are the benefits of using a Headless browser?

Going headless has a variety of benefits. If you are a web designer, the HTML markup becomes simple to understand, because PHP code, complex JavaScript, and widgets are no longer in use; just plain HTML with some kind of additional placeholder syntax is required. This also means that the HTML page can be served statically, bringing the application load time down by a significant amount. From the e-commerce perspective, it acts as a viable channel for sales, where the majority of traffic comes from mobile. With the advent of disruptive technologies such as a headless browser for Magento, the path to purchase has expanded: today it not only includes mobile traffic but even features a complex matrix of buyer touchpoints.

Why use a headless browser for Magento?

Opting for Magento with a headless front end has benefits of its own. In Magento, the JavaScript-coded parts are loosely coupled, and the scope of flexibility widens when it comes to choosing a frontend framework such as AngularJS, Vue.js, and others. In other words, one does not need to be a Magento developer to build on Magento: all you have to do is focus on a REST API. For instance, even if a single page loads 15 different resources from different URLs, the HTML document itself can stay optimal.
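As a minimal sketch of what 'focus on a REST API' looks like in practice, here is a request against Magento 2's standard products endpoint; the store URL and token are invented, and it assumes the requests package plus a Magento integration access token:

```python
import requests

BASE_URL = "https://example.com"           # hypothetical Magento 2 store
TOKEN = "your-integration-access-token"    # hypothetical integration token

# Fetch the first five products through Magento 2's REST API; any frontend
# (AngularJS, Vue.js, a mobile app) can consume the same endpoint.
resp = requests.get(
    f"{BASE_URL}/rest/V1/products",
    params={"searchCriteria[pageSize]": 5},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for item in resp.json().get("items", []):
    print(item["sku"], item["name"])
```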
Magento is a flexible framework that can be used to implement your own logic for pricing, logins, checkout, and so on. With a headless front end, Magento gets a clear performance boost: all static parts of the pages load quickly, and the dynamic parts load lazily through Ajax. In Magento 2, you have the additional support of full-page cache. It may even interest you to know that Magento 2 offers Private Content, which is equipped to handle lazy loading more efficiently.

Isn't Magento 2 headless already?

One of the common misconceptions I have come across is people believing that Magento 2 is already headless. For Magento to be headless, a JavaScript developer needs to know the KnockoutJS ViewModels to make use of the Magento logic, and if the ViewModels do not suffice, a Magento developer has to add backend logic to this JavaScript layer. The same applies to going 100% headless: whenever a REST resource isn't available, professionals must make it available for Magento 2. Whether or not to go headless will always be debated, because an HTML document contains not only the templating part but also static content that shouldn't be changed. For me, a headless approach is best suited to businesses that have a CMS website alongside a B2C or B2B storefront. It is also popular among those who are looking to put their JavaScript teams back to work.

Author Bio

Olivia Diaz works at eTatvaSoft. Being a tech geek, she keeps a close watch over the industry, focusing on the latest technology news and gadgets.

Hackers are our society’s immune system - Keren Elazari on the future of Cybersecurity

Amrata Joshi
15 Dec 2018
9 min read
Keren Elazari, a world-renowned cybersecurity analyst, senior researcher at the Tel Aviv University Interdisciplinary Cyber Research Center, author, and speaker, spoke earlier this year at Six about the future of cybersecurity and a range of real-world attacks in recent years. She also dived into the consequences as well as the possible motivations behind such attacks. The Six event features press conferences and hackathons, and its organizers deal with around one billion security events on a daily basis. The cybersecurity events organized by Six bring in international experts who answer questions and give insights on various topics. This article highlights a few insights from this year's Six on Cybersecurity talk by Keren Elazari on the future of cybersecurity from a hacker's perspective.

How hackers used Starbucks' free WiFi to use customer CPU resources for crypto mining

"What if I told you that in 10 seconds I could take over your computer, generate thousands of dollars worth of cryptocurrencies all while you are drinking your morning coffee? You might think it's impossible, but this is exactly what happened in Argentina earlier this year." - Keren Elazari

Earlier this year, Starbucks customers in Argentina experienced a slight delay of 10 seconds after logging into the website for free Wi-Fi. So what exactly happened? A security researcher discovered that for those ten seconds the computer was running Coinhive, a type of distributed cryptocurrency mining software. It was running on all the machines in Argentinian Starbucks stores that logged in for free Wi-Fi, and the software generated a lot of Monero cryptocurrency. The hacker didn't even have to write the JavaScript for this attack; he just had to buy the code from Coinhive, whose business model allows anyone to monetize a user's CPU. Cybercriminals can earn a lot of money from technologies like Coinhive. There are even some news sites in the US looking at such Coinhive-style solutions as an alternative to paying for the news. This is an example of how creative technologies made by cybercriminals can generate completely new business models.

IoT brings a whole new set of vulnerabilities to your ecosystem

"According to the Munich security conference report, they are expecting this year double the amount of devices than there are humans on this planet. This is not going to change. We definitely need an immune system for the new digital universe because it is expanding without a stop."

Devices like cameras, CCTV systems, and webcams could be used by potential hackers to spy on users. And even if you take measures such as covering a webcam's lens with tape, it can still be hacked, not with the intention of stealing pictures, but as a stepping stone to hack other devices.

How the Mirai DDoS attack used webcams to bring down the likes of Airbnb and Amazon

This is what happened two years ago, when the massive internet DDoS attack Mirai took place. Over the course of a weekend, it took down websites all over the world. Websites like Amazon, Airbnb, and large news sites were down, and these companies faced losses as a result. The attack was supercharged by the numerous devices in people's homes. These devices could be used for the DDoS attack because they were using basic internet protocols, such as DNS, that can be easily subverted. Even worse, many of the devices used default username and password combinations. It's important to change the passwords on newly purchased devices.

With Shodan, a search engine for internet-connected devices, one can check which devices in their organization or at home are exposed. This is helpful, as it improves organizations' protection against getting hacked.
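As a minimal sketch of that kind of check, assuming the shodan Python package and a (hypothetical) API key, a query for exposed webcams might look like this:

```python
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"   # hypothetical key, obtained from shodan.io
api = shodan.Shodan(API_KEY)

# Search the Shodan index for exposed webcams; each match carries the
# device's IP, port, and service banner.
results = api.search("webcam")
print(f"Exposed devices found: {results['total']}")
for match in results["matches"][:5]:
    print(match["ip_str"], match["port"], match.get("org", "n/a"))
```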
How hackers used a smart fish tank to steal data from a casino and an AI caught it

"Hackers have found very creative, very fast automatic ways to identify devices that they can use and they will utilize any resource online. It would just become a part of their digital army. Speaking of which, even an aquarium, a fish tank, was hacked recently."

Recently, a smart fish tank in a US casino was hacked. It had smart sensors that checked the temperature, the feeding schedule of the fish, and the salinity of the water. While hacking a fish tank does not appear to offer any monetary incentive to a hacker, its connection to the internet makes it a valuable access point. The hackers, who already had access to the casino network, used the aquarium's outgoing internet connection to send out 10 gigabytes of data from the casino. As the data left over this connection, there was no firewall in the way, and nobody noticed. The suspicious activity was eventually flagged by a self-learning algorithm, which realized that there was something fishy about an outgoing connection that had no relation to the fish tank setup.

How WannaCry used ransomware attacks to target organizations

"I don't think we should shame organizations for having to deal with ransomware because it is a little bit like a flu in a sense that these attacks are designed to propagate and infect as many computers as they can." - Keren Elazari

In May 2017, the WannaCry attack by the WannaCry ransomware cryptoworm affected computers running the Microsoft Windows operating system by encrypting data, and the criminals demanded ransom payments in the Bitcoin cryptocurrency. The attack hit the UK National Health Service the hardest: according to the NHS, 30% of national health services were not functioning, and 80 out of the 236 trusts in England were affected. According to the UK government, North Korea was behind this attack; the country, being under sanctions, is in need of money. The Lazarus Group, a cybercrime group from North Korea, attacked the SWIFT infrastructure and also attacked the central bank of Bangladesh last year.

NotPetya - the wiper attack

"Whoever was hacking the tax company in the Ukraine wanted to create an effective virus that would destroy the evidence of everything they have been doing for two years in a bunch of Ukrainian companies. It might have been an accident that it infected so many other companies in the world."

In June 2017, NotPetya, a wiper attack, affected enterprise networks across Europe, with Ukrainian companies hit the hardest. The attack looked like ransomware, since it demanded a payment, but it was actually a wiper: it destroyed data, wiping out what had been stored over two years. Maersk, the world's largest container shipping company, was badly affected as collateral damage, facing losses of around $300 million. End-of-life operating systems were most affected by this virus. The software vulnerability used in both of these attacks, the ransomware and the wiper, was codenamed EternalBlue, a cyber weapon discovered and developed by the National Security Agency (NSA). The NSA couldn't keep track of EternalBlue, and the criminals took advantage of this and attacked using this cyber weapon.
Earlier this year, a cyber attack was made on the German government's IT network. The attack affected the defence and interior ministries' private networks.

What might motivate nation state actors to back cyber-attacks?

"The story is never simple when it comes to cyber attackers. Sometimes the motivations of a nation or nation state actors can be hidden behind what seems like a financial or criminal activity."

One reason a nation or state might back a cyber-attack is financial: it might be under sanctions and need money, for instance to develop nuclear weapons. Another reason could be that the state or country is in a state of chaos or confusion and is trying to create a dynamic from which it could benefit. Lastly, it could be an accident, where a cyber attack turns out to be far more effective than the state ever imagined.

What can organizations do to safeguard themselves from such cyberattacks?

- Consider the hundreds of security decisions made every day: putting personal details like a credit card number into a website, downloading software that could cause trouble to the system, and so on.
- Instead of using a recycled password, go for a new one.
- Educate employees in the organization about penetration testing.
- Share details of past experiences with hacking; this helps in working against it.
- Develop a cybersecurity culture in the organization to bring change.
- Invite a red team to review the organization's systems.
- Encourage bug bounty programs for reporting bugs in the organization.
- Security professionals can work in collaboration with programs like Mayhem, an automated system that helps find the bugs in a system. It won a hacking challenge in 2016 but was beaten by humans the next year.

"Just imagine you are in a big ball room and you are looking at the hacking competition between completely automated supercomputers and this (Mayhem) ladies and gentlemen is the winner and I think is also the future."

Just two years ago, Mayhem, a machine, won a hacking competition organized by the United States Defense Advanced Research Projects Agency (DARPA) in Las Vegas, where seven machines (supercomputers) competed against each other. Mayhem is the first non-human to win a hacking competition. In 2017, Mayhem competed against humans, and though the humans won, we can still imagine how smart these machines are becoming.

What does the future of cybersecurity look like?

"In the years to come, automation, machine learning, algorithms, AI will be an integral part, not just of every aspect of society, but [also an] integral part of cybersecurity. That's why I believe we need more such technologies and more humans that know how to work alongside and together with these automated creatures. If you like me think that friendly hackers, technology, and building an ecosystem will be a good way to create a safer society, I hope you take the red pill and wake up to this reality," concludes Elazari.

As 2018 comes to a close, plagued with security breaches across industries, Keren's insightful talk on cybersecurity is a must-watch for everyone entering 2019.

Read next:
- Packt has put together a new cybersecurity bundle for Humble Bundle
- 5 lessons public wi-fi can teach us about cybersecurity
- Blackberry is acquiring AI & cybersecurity startup, Cylance, to expand its next-gen endpoint solutions like its autonomous cars' software

A serverless online store on AWS could save you money. Build one.

Savia Lobo
14 Jun 2018
9 min read
In this article you will learn to build an entire serverless project: an AWS online store, beginning with a React SPA frontend hosted on AWS, followed by a serverless backend with API Gateway and Lambda functions. This article is an excerpt taken from the book 'Building Serverless Web Applications' written by Diego Zanon. In this book, you will be introduced to the AWS services, and you'll learn how to estimate costs and how to set up and use the Serverless Framework.

The serverless architecture of AWS' online store

We will build a real-world use case of a serverless solution. This sample application is an online store with the following requirements:

- List of available products
- Product details with user rating
- Add products to a shopping cart
- Create account and login pages

For a better understanding of the architecture, take a look at the following diagram, which gives a general view of how the different services are organized and how they interact:

Estimating costs

In this section, we will estimate the costs of our sample application demo based on some usage assumptions and Amazon's pricing model. All pricing values used here are from mid-2017 and consider the cheapest region, US East (Northern Virginia). This section covers an example to illustrate how costs are calculated. Since the billing model and prices can change over time, always refer to the official sources to get updated prices before making your own estimations. You can use Amazon's calculator, which is accessible at this link: http://calculator.s3.amazonaws.com/index.html. If you still have any doubts after reading the instructions, you can always contact Amazon's support for free to get commercial guidance.

Assumptions

For our pricing example, we can assume that our online store will receive the following traffic per month:

- 100,000 page views
- 1,000 registered user accounts
- 200 GB of data transferred, considering an average page size of 2 MB
- 5,000,000 code executions (Lambda functions) with an average of 200 milliseconds per request

Route 53 pricing

We need a hosted zone for our domain name, which costs US$ 0.50 per month. Also, we need to pay US$ 0.40 per million DNS queries to our domain. As this is a prorated cost, 100,000 page views will cost only US$ 0.04.

Total: US$ 0.54

S3 pricing

Amazon S3 charges US$ 0.023 per GB/month stored, US$ 0.004 per 10,000 requests to your files, and US$ 0.09 per GB transferred. However, as we are considering CloudFront usage, transfer costs will be charged at CloudFront prices and will not be considered in the S3 billing. If our website occupies less than 1 GB of static files and has an average page of 2 MB and 20 files, we could serve 100,000 page views for less than US$ 20. Considering CloudFront, S3 costs go down to US$ 0.82, while you pay for CloudFront usage in another section. Real costs would be even lower, because CloudFront caches files and would not need to make 2,000,000 file requests to S3, but let's skip this detail to reduce the complexity of this estimation. On a side note, the cost would be much higher if you had to provision machines to handle this number of page views to a static website with the same availability and scalability.

Total: US$ 0.82

CloudFront pricing

CloudFront is slightly more complicated to price, since you need to guess how much traffic comes from each region, as regions are priced differently.
The following table shows an example of estimation:

Region          Estimated traffic   Cost per GB transferred   Cost per 10,000 HTTPS requests
North America   70%                 US$ 0.085                 US$ 0.010
Europe          15%                 US$ 0.085                 US$ 0.012
Asia            10%                 US$ 0.140                 US$ 0.012
South America   5%                  US$ 0.250                 US$ 0.022

As we have estimated 200 GB of files transferred with 2,000,000 requests, the total will be US$ 21.97.

Total: US$ 21.97

Certificate Manager pricing

Certificate Manager provides SSL/TLS certificates for free. You only need to pay for the AWS resources you create to run your application.

IAM pricing

There is no charge specifically for IAM usage. You are charged only for the AWS resources your users consume.

Cognito pricing

Each user has an associated profile that costs US$ 0.0055 per month. However, there is a permanent free tier that allows 50,000 monthly active users without charges, which is more than enough for our use case. Besides that, we are charged for Cognito syncs of our user profiles: US$ 0.15 for each 10,000 sync operations and US$ 0.15 per GB/month stored. If we estimate 1,000 active registered users, with less than 1 MB per profile and fewer than 10 visits per month on average, we can estimate a charge of US$ 0.30.

Total: US$ 0.30

IoT pricing

IoT charges start at US$ 5 per million messages exchanged. As each page view makes at least two requests, one to connect and another to subscribe to a topic, we can estimate a minimum of 200,000 messages per month. We need to add 1,000 messages if we suppose that 1% of users will rate the products, and we can ignore other requests like disconnect and unsubscribe because they are excluded from billing. In this setting, the total cost would be US$ 1.01.

Total: US$ 1.01

SNS pricing

We will use SNS only for internal notifications, when CloudWatch triggers a warning about issues in our infrastructure. SNS charges US$ 2.00 per 100,000 e-mail messages, but it offers a permanent free tier of 1,000 e-mails. So it will be free for us.

CloudWatch pricing

CloudWatch charges US$ 0.30 per metric/month and US$ 0.10 per alarm, and offers a permanent free tier of 50 metrics and 10 alarms per month. If we create 20 metrics and expect 20 alarms in a month, we can estimate a cost of US$ 1.00.

Total: US$ 1.00

API Gateway pricing

API Gateway charges US$ 3.50 per million API calls received and US$ 0.09 per GB transferred out to the Internet. If we assume 5 million requests per month, with each response averaging 1 KB, the total cost of this service will be US$ 17.93.

Total: US$ 17.93

Lambda pricing

When you create a Lambda function, you need to configure the amount of RAM that will be available for use, ranging from 128 MB to 1.5 GB. Allocating more memory means additional costs. This breaks the philosophy of avoiding provisioning, but at least it's the only thing you need to worry about. A good practice is to estimate how much memory each function needs and run some tests before deploying to production. A bad provision may result in errors or higher costs. Lambda has the following billing model:

- US$ 0.20 per 1 million requests
- US$ 0.00001667 per GB-second

Running time is counted in fractions of seconds, rounding up to the nearest multiple of 100 milliseconds. Furthermore, there is a permanent free tier that gives you 1 million requests and 400,000 GB-seconds per month without charges. In our use case scenario, we have assumed 5 million requests per month with an average of 200 milliseconds per execution.
We can also assume that the allocated RAM memory is 512 MB per function:

- Request charges: since 1 million requests are free, you pay for 4 million, which costs US$ 0.80.
- Compute charges: 5 million executions of 200 milliseconds each gives us 1 million seconds. As we are running with a 512 MB capacity, this results in 500,000 GB-seconds, of which 400,000 GB-seconds are free, leaving a charge of 100,000 GB-seconds that costs US$ 1.67.

Total: US$ 2.47

SimpleDB pricing

Take a look at the following SimpleDB billing, where the free tier is valid for new and existing users:

- US$ 0.14 per machine-hour (25 hours free)
- US$ 0.09 per GB transferred out to the internet (1 GB is free)
- US$ 0.25 per GB stored (1 GB is free)

Take a look at the following charges:

- Compute charges: considering 5 million requests with an average of 200 milliseconds of execution time, where 50% of this time is spent waiting for the database engine to execute, we estimate 139 machine-hours per month. Discounting 25 free hours, we have an execution cost of US$ 15.96.
- Transfer costs: since we'll transfer data between SimpleDB and AWS Lambda, there is no transfer cost.
- Storage charges: if we assume a 5 GB database, this results in US$ 1.00, since 1 GB is free.

Total: US$ 16.96, but this will not be added to the final estimation, since we will run our application using DynamoDB.

DynamoDB

DynamoDB requires you to provision the throughput capacity that you expect your tables to offer. Instead of provisioning hardware, memory, CPU, and other factors, you say how many read and write operations you expect, and AWS handles the necessary machine resources to meet your throughput needs with consistent, low-latency performance. One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for objects up to 4 KB in size. Regarding writing capacity, one unit means that you can write one object of size 1 KB per second. Considering these definitions, AWS offers a permanent free tier of 25 read units and 25 write units of throughput capacity, in addition to 25 GB of free storage. It charges as follows:

- US$ 0.47 per month for every Write Capacity Unit (WCU)
- US$ 0.09 per month for every Read Capacity Unit (RCU)
- US$ 0.25 per GB/month stored
- US$ 0.09 per GB transferred out to the Internet

Since our estimated database will have only 5 GB, we are inside the free tier, and we will not pay for transferred data because there is no transfer cost to AWS Lambda. Regarding read/write capacity, we have estimated 5 million requests per month. Distributed evenly, that gives two requests per second; in this case, we will consider it one read and one write operation per second. We now need to estimate how many objects are affected by each read and write operation. For a write operation, we can estimate that we will manipulate 10 items on average, and a read operation will scan 100 objects. In this scenario, we would need to reserve 10 WCU and 100 RCU. As we have 25 WCU and 25 RCU for free, we only need to pay for 75 RCU per month, which costs US$ 6.75.

Total: US$ 6.75

Total pricing

Let's summarize the cost of each service in the following table:

Service       Monthly Costs
Route 53      US$ 0.54
S3            US$ 0.82
CloudFront    US$ 21.97
Cognito       US$ 0.30
IoT           US$ 1.01
CloudWatch    US$ 1.00
API Gateway   US$ 17.93
Lambda        US$ 2.47
DynamoDB      US$ 6.75
Total         US$ 52.79

This results in a total cost of roughly US$ 50 per month in infrastructure to serve 100,000 page views.
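To make the Lambda and DynamoDB arithmetic above easy to replay, here is a minimal sketch that recomputes those two line items from the same assumptions and the mid-2017 prices quoted in this excerpt:

```python
# Lambda: 5M requests/month, 200 ms each, 512 MB allocated.
REQUESTS = 5_000_000
FREE_REQUESTS = 1_000_000
PRICE_PER_M_REQUESTS = 0.20
request_cost = (REQUESTS - FREE_REQUESTS) / 1_000_000 * PRICE_PER_M_REQUESTS

gb_seconds = REQUESTS * 0.2 * (512 / 1024)       # 500,000 GB-seconds
FREE_GB_SECONDS = 400_000
PRICE_PER_GB_SECOND = 0.00001667
compute_cost = (gb_seconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND

print(f"Lambda: US$ {request_cost + compute_cost:.2f}")   # -> US$ 2.47

# DynamoDB: 10 WCU and 100 RCU needed; 25 of each come free.
wcu_cost = max(10 - 25, 0) * 0.47   # fully covered by the free tier
rcu_cost = max(100 - 25, 0) * 0.09  # 75 billable RCU
print(f"DynamoDB: US$ {wcu_cost + rcu_cost:.2f}")         # -> US$ 6.75
```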
If you have a conversion rate of 1%, you will get 1,000 sales per month, which means that you pay about US$ 0.05 in infrastructure for each product that you sell. Thus, in this article you learned about the serverless architecture of an AWS online store and how to estimate its costs. If you've enjoyed reading the excerpt, do check out Building Serverless Web Applications to monitor the performance, efficiency, and errors of your apps, and to learn how to test and deploy your applications.

Read next:
- Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
- Serverless computing wars: AWS Lambdas vs Azure Functions
- Using Amazon Simple Notification Service (SNS) to create an SNS topic

How IBM Watson is paving the road for Healthcare 3.0

Kunal Parikh
29 Sep 2017
5 min read
[box type="shadow" align="" class="" width=""]Matt Kowalski (in Gravity): Houston, in the blind.[/box] Being an oncologist is a difficult job. Every year, 50,000 research papers are published on just Oncology. If an Oncologist were to read every one of them, it will take nearly 29 hours of reading every workday to stay updated on this plethora of information. Added to this is the challenge of dealing with nearly 1000 patients every year. Needless to say, a modern-day physician is bombarded with information that doubles every three years. This wide gap between the availability of information and the ability to access it in a manner that’s practically useful is simply getting wider. No wonder doctors and other medical practitioners can feel overwhelmed and lost in space, sometimes! [box type="shadow" align="" class="" width=""]Mission Control: Shariff, what's your status? Shariff: Nearly there.[/box] Advances in the field of Big Data and cognitive computing are helping make strides in solving this kind of pressing problems facing the healthcare industry. IBM Watson is at the forefront of solving such scenarios and as time goes by the system will only become more robust. From a strict technological standpoint, the new applications of Watson are impressive and groundbreaking: The system is capable of combing through 600,000 pieces of medical evidence, 2 million pages of text from 42 medical journals and clinical trials in the area of oncology research, and 1.5 million patient records to provide on-the-spot treatment recommendations to health care providers. According to IBM, more than 90 percent of the nurses who have worked with Watson follow the guidance the system gives to them. - Infoworld Watson, who? IBM Watson is an interactive expert system that uses cognitive computing, natural language processing, and evidence-based learning to arrive at answers to questions posed to it by its users in plain English. Watson doesn’t just stop with hypotheses generation but goes ahead and proposes a list of recommendations to the user. Let’s pause and try to grasp what this means for a healthcare professional. Imagine a doctor typing in his/her iPad “A cyst found in the under-arm of the patient and biopsy suggesting non-Hodgkin's Lymphoma”. With so many cancers and alternative treatments available to treat them, to zero down on the right cure at the right time is a tough job for an oncologist. IBM Watson taps into the collective wisdom of Oncology experts - practitioners, researchers, and academicians across the globe to understand the latest advances happening inside the rapidly evolving field of Oncology. It then culls out information most relevant to the patient’s particular situation after considering their medical history. Within minutes, Watson then comes up with various tailored approaches that the doctor can adopt to treat his/her patient. Watson can help healthcare professionals narrow down on the right diagnosis, take informed and timely decisions and put in place treatment plans for their patients. All the doctor has to do is ask a question while mentioning the symptoms a patient is experiencing. This question-answer format is pretty revolutionary in that it can completely reshape how healthcare exists. How is IBM Watson redefining Healthcare? As more and more information is fed into IBM Watson, doctors will get highly customised recommendations to treat their patients. The impact on patient care and hospital cost can be tremendous. 
For healthcare professionals, Watson can:

- Reduce or eliminate time spent mining insights from an ever-growing body of research
- Provide a list of recommended treatment options, each with a confidence score attached
- Design treatment plans based on the option chosen

In short, it can act as a highly effective personal assistant to this group. This means these professionals are more competent, more successful, and have the time and energy to make deep personal connections with their patients, thereby elevating patient care to a whole different level.

For patients, Watson can:

- Act as an interactive interface answering their queries and connecting them with their healthcare professionals
- Provide at-home diagnostics and healthcare advice
- Keep their patient records updated and synced with their hospitals

Thus, Watson can help patients make informed medical choices, take better care of themselves, and alleviate the stress and anxiety induced by a chaotic and opaque hospital environment. For the healthcare industry, it means a reduction in overall cost to hospitals, reduced investment in post-treatment patient care, higher rates of success, and a reduction in errors due to oversight, misdiagnosis, and other human failings. This can indirectly improve key administrative metrics, lower employee burnout and churn rates, improve morale, and yield other intangible benefits. The implications of such a transformation are not limited to health care alone.

What about insurance, Watson?

IBM Watson can have a substantial impact on insurance companies too. Insurance, a fiercely debated topic, is a major cost for healthcare. Increasing revenue potential, better customer relationships, and reducing cost are some areas where Watson will start disrupting medical insurance. But that's just the beginning. Tighter integration with hospitals, more data on patient care, and more information on newer remedies will provide ground-breaking insights to insurance companies. These insights will help them figure out the right premiums and underwriting frameworks. Moreover, the above is not a scene set in some distant future. In Japan, the insurance company Fukoku Mutual Life Insurance replaced 34 employees and deployed IBM Watson. Customers of Fukoku can now discuss settling payments directly with an AI system instead of a human being. Fukoku paid a one-time fee of $170,000 along with yearly maintenance of $128,000 to IBM for Watson's services. The company plans to recover this cost by replacing its team of sales personnel, insurance agents, and customer care personnel, potentially saving nearly a million dollars annually. These are interesting times, and some may even call them anxiety-inducing.

Shariff: "No, no, no, Houston, don't be anxious. Anxiety is bad for the heart."

7 Popular Applications of Artificial Intelligence in Healthcare

Guest Contributor
26 Jun 2018
5 min read
With the advent of automation, artificial intelligence (AI), and machine learning, we hear about their applications regularly in the news across industries. This has been especially true for healthcare, where various hospitals, health insurance companies, healthcare units, and so on have been impacted by AI in more substantial and concrete ways than other industries. In recent years, healthcare startups and life science organizations have ventured into artificial intelligence technology, and the area is among those most heavily invested in by VCs. Various organizations with ties to healthcare are leveraging advances in artificial intelligence algorithms for remote patient monitoring and medical imaging and diagnostics, and are implementing newly developed, sophisticated methods and applications into their systems. Let's explore some of the most popular AI applications that have revamped the healthcare industry.

Proper maintenance and management of medical records

Assembling, analyzing, and maintaining medical information and records is one of the most common applications of AI. With the coming of digital automation, robots are being used for collecting and tracing data for proper data management and analysis, bringing manual labor down to a considerable extent.

Computerized medical consultation and treatment path

Medical consultation apps like DocsApp allow a user to talk to experienced specialist doctors on chat or call, directly from their phone, in a private and secure manner. Users can report their symptoms in the app, and this ensures they are connected to the right specialist physicians as per their medical history. This has been made possible by AI systems. AI also aids in treatment design: analyzing data and making notes and reports from a patient's file helps in choosing the right customized treatment as per the patient's medical history.

Eliminates monotonous manual labor

Various medical tasks, like analyzing X-ray reports, test reports, and CT scans, can be executed by robots and other mechanical devices more accurately. Radiology is one discipline wherein human supervision and control have dropped to a substantial level due to the extensive use of AI.

Aids in drug manufacture and creation

Generally, billions of dollars are spent on developing pharmaceuticals through clinical trials, and it takes almost a decade or two to bring a life-saving drug to market. But now, with the arrival of AI, the drug creation procedure has been simplified and has become considerably cheaper as well. Even in the recent outbreak of the Ebola virus, AI was used for drug discovery, to redesign solutions, and to scan existing medicines for candidates that could help eradicate the epidemic.

Regular health monitoring

In the current era of digitization, there are wearable health trackers, like Garmin and Fitbit, which can monitor your heart rate and activity levels. These devices help users keep a close check on their health by setting up exercise plans or reminding them to stay hydrated. All this information can also be shared with your physician, through AI systems, to track your current health status.

Helps in the early and accurate detection of medical disorders

AI helps in spotting carcinogenic and cardiovascular disorders at an early stage and also aids in predicting health issues that people are likely to contract due to hereditary or genetic reasons.
Enhances medical diagnosis and medication management

Medical diagnosis and medication management are the ultimate data-based problems in the healthcare industry. IBM's Watson, a deep learning system, has simplified medical investigation and is being applied to oncology, specifically for cancer diagnosis. Previously, human doctors would collect patient data, research it, and conduct clinical trials; with AI, the manual effort has reduced considerably. For medication management, certain apps have been developed to monitor the medicines taken by a patient. The cellphone camera, in conjunction with AI technology, checks whether patients are taking their medication as prescribed. This also helps in detecting serious medical problems, tracking patients' adherence to medication, and monitoring participants' behavior in scientific trials.

To conclude, we are gradually embarking on a new era of cognitive technology built on the power of AI-based systems. In the coming years, we can expect AI to transform every area of the healthcare industry that it touches. Experts are constantly looking for ways to reorganize the existing structure and power up healthcare on the basis of new AI technology. The ultimate goals are to improve patient experience, build better public health management, and reduce costs by automating manual labor.

Author Bio

Maria Thomas is the Content Marketing Manager and Product Specialist at GreyCampus with eight years of rich experience in professional certification courses like PMI-Project Management Professional, PMI-ACP, Prince2, ITIL (Information Technology Infrastructure Library), Big Data, Cloud, Digital Marketing, and Six Sigma.

Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
How IBM Watson is paving the road for Healthcare 3.0

Android O: What's new and why it's been introduced

Raka Mahesa
07 May 2017
5 min read
Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, Marshmallow, and Nougat. If you thought that was just a list of various sweet treats, well, you're not wrong, but it's also a list of Android version names. And if you guessed that the next version of Android starts with O, you're exactly right, because Google has announced Android O – the latest version of Android. So, what's new in the O version of Android? Let's find out.

Notifications have always been one of Android's biggest strengths. Notifications on Android are informative, versatile, and customizable, so they fit their users' needs. Google clearly understands this and has kept improving the notification system of Android. They have overhauled how notifications look, made notifications more interactive, and given users a way to manage the importance of each notification. So, of course, for this version of Android, Google has added even more features to the notification system.

The biggest feature added to the notification system in Android O is the Notification Channel. Basically, Notification Channel is an API that allows developers to define categories for notifications from their apps. App users will then be able to control the settings for each category of notifications. This way, users can fine-tune applications so they only show notifications that the users think are important.

For example, let's say you have a chat application with two notification channels. The first channel is for notifying users when a new chat message arrives, and the second one is for when the user is added to someone else's friend list. Some users may only care about new chat messages, so they can turn off that one category instead of turning off all notifications from the app.
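A minimal sketch of how registering those two channels might look on Android O (API level 26 and above); the channel IDs, names, and importance levels here are made-up assumptions for the chat-app example above:

```java
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;

public final class Notifications {
    public static void registerChannels(Context context) {
        // One channel per notification category the user can control individually.
        NotificationChannel messages = new NotificationChannel(
                "new_messages",                     // hypothetical channel ID
                "New chat messages",
                NotificationManager.IMPORTANCE_HIGH);
        messages.setDescription("Alerts when a new chat message arrives");

        NotificationChannel friendAdds = new NotificationChannel(
                "friend_adds",                      // hypothetical channel ID
                "Friend list additions",
                NotificationManager.IMPORTANCE_LOW);
        friendAdds.setDescription("Alerts when someone adds you to their friend list");

        NotificationManager manager =
                context.getSystemService(NotificationManager.class);
        manager.createNotificationChannel(messages);
        manager.createNotificationChannel(friendAdds);
    }
}
```

Once channels are registered, the user, not the developer, gets the final say on each category's behavior from the system settings.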
Other features added to the Android O notification system are Notification Snoozing and Notification Timeout. Just like an alarm, Notification Snoozing allows the user to snooze a notification and let it reappear later when the user has time. Meanwhile, Notification Timeout allows developers to set a timeout duration for notifications. Imagine that you want to notify a user about a flash sale that only runs for 2 hours; by adding a timeout, the notification can remove itself when the event is over. Okay, enough about notifications – what else is new in Android O?

Autofill Framework

One of the newest things introduced with Android O is the Autofill Framework. You know how browsers can remember your full name, email address, home address, and other details, and automatically fill in a registration form with that data? Well, the same capability is coming to Android apps via the Autofill Framework. An app can also register itself as an Autofill Service. For example, if you made a social media app, you can let other apps use the user's account data from your app to help users fill in their forms.

Account data

Speaking of account data, with Android O, Google has removed the ability for developers to get a user's account data using the GET_ACCOUNTS permission, forcing developers to use the account chooser dialog instead. So with Android O, developers can no longer automatically fill in a text field with the user's email address and name, and have to let users pick accounts on their own.

And it's not just form filling that gets reworked. In an effort to improve battery life and phone performance, Android O adds a number of limitations to background processes. For example, on Android O, apps running in the background (that is, apps that don't have any of their interface visible to users) will not be able to get the user's location data as frequently as before. Also, apps in the background can no longer create and use background processes.

Do keep in mind that some of those limitations will impact any application running on Android O, not just apps that were built using the O version of the SDK. So if you have an app that relies on background processes, you may want to check your app to ensure it works fine on Android O.

App icons

Let's talk about something more visual: app icons. You know how manufacturers add custom skins to their phones to differentiate their products from competitors? Well, some time ago they also changed the shape of all app icons to fit the overall UI of their phones, and this broke some carefully designed icons. Fortunately, with the Adaptive Icon feature introduced in Android O, developers will be able to design an icon that can adjust to a variety of shapes.

We've covered a lot, but there are still tons of other features added to Android O that we haven't discussed, including multi-display support, a new native Audio API, Keyboard Navigation, new APIs to manage WebView, new Java 8 APIs, and more. Do check out the official documentation for those.

That being said, we're still missing the most important thing: what is going to be the full name for Android O? I can only think of Oreo at the moment. What about you?

About the author

Raka Mahesa is a game developer at Chocoarts (chocoarts.com), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Messaging app Telegram's updated Privacy Policy is an open challenge

Amarabha Banerjee
08 Sep 2018
7 min read
Social media companies are facing a lot of heat at present because of their privacy issues. One of them is Facebook: the Cambridge Analytica scandal even prompted a Senate hearing for Mark Zuckerberg. On the other end of this spectrum is another messaging app known as Telegram, registered in London, United Kingdom, and founded by the Russian entrepreneur Pavel Durov. Telegram has been in the news for the absolute opposite situation. It's often touted as one of the most secure and secretive messaging apps, and its end-to-end encryption ensures that security agencies across the world have a tough time getting access to any suspicious piece of information. For this reason, Russia banned the use of the Telegram app in April 2018.

Telegram recently updated their privacy policies. These updates have further ensured that Telegram will retain the title of the most secure messaging application on the planet. It's imperative for any messaging app to get access to our data, but how they choose to use it makes you either vulnerable or secure. Telegram, in their latest update, have stated that they process personal data on the grounds that such processing caters to the following two goals:

- Providing effective and innovative services to their users
- Detecting, preventing, or otherwise addressing fraud or security issues in respect of their provision of services

The caveat for the second point is that security interests shall not override the fundamental rights and freedoms that require the protection of personal data. This clause is an excellent example of how applications can prove to be torchbearers for human rights and basic privacy amidst glaring loopholes. Telegram has listed the kinds of user data accessed by the app. They are as follows:

Basic Account Data

Telegram stores basic account user data, including mobile number, profile name, profile picture, and about information, which are needed to create a Telegram account. The most interesting part is that Telegram allows you to keep only your username public (if you choose to). The people who have you in their contact list will see you as you want them to - for example, you might be a John Doe in public, but your mom will still see you as 'Dear Son' in her contacts. Telegram doesn't require your real name, gender, or age, and your screen name need not be your real name.

E-mail Address

When you enable 2-step verification for your account or store documents using the Telegram Passport feature, you can opt to set up a password recovery email. This address will only be used to send you a password recovery code if you forget your password. Telegram is particular about not sending any unsolicited marketing emails to you.

Personal Messages

Cloud Chats

Telegram stores messages, photos, videos, and documents from your cloud chats on their servers so that you can access your data from any of your devices at any time, without having to rely on third-party backups. All data is stored heavily encrypted, and the encryption keys in each case are stored in several other data centers in different jurisdictions. This way, local engineers or physical intruders cannot get access to user data.

Secret Chats

Telegram has a feature called secret chats that uses end-to-end encryption. This means that all data is encrypted with a key that only the sender and the recipient know. There is no way for Telegram, or anybody else without direct access to your device, to learn what content is being sent in those messages. Telegram does not store secret chats on their servers.
They also do not keep any logs for messages in secret chats, so after a short period of time there is no way of determining who or when you messaged via secret chats. Secret chats are not available in the cloud — you can only access those messages from the device they were sent to or from.

Media in Secret Chats

When you send photos, videos, or files via secret chats, each item is encrypted with a separate key, not known to the server, before being uploaded. This key and the file's location are then encrypted again, this time with the secret chat's key, and sent to your recipient, who can then download and decipher the file. This means that the file is technically on one of Telegram's servers, but it looks like a piece of random, indecipherable garbage to everyone except you and the recipient. This whole process is random, and these random data packets are periodically purged from the storage disks too.
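What the "Media in Secret Chats" flow above describes is essentially envelope encryption: encrypt the payload with a fresh key, then encrypt that key with a key only the chat participants hold. Below is a toy Java sketch of that idea using standard AES-GCM. This is purely illustrative; Telegram's real protocol (MTProto) works quite differently:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class EnvelopeDemo {
    private static final SecureRandom RNG = new SecureRandom();

    static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        RNG.nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = cipher.doFinal(plaintext);
        // Prepend the IV so the recipient can decrypt.
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return out;
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);

        SecretKey chatKey = kg.generateKey(); // known only to the two participants
        SecretKey fileKey = kg.generateKey(); // fresh key for this one file

        byte[] file = "holiday-photo-bytes".getBytes(StandardCharsets.UTF_8);

        // 1. The file itself is encrypted with the per-file key and uploaded.
        byte[] uploadedBlob = encrypt(fileKey, file);

        // 2. The per-file key is encrypted with the chat key and sent to the recipient.
        byte[] wrappedKey = encrypt(chatKey, fileKey.getEncoded());

        // The server stores uploadedBlob but can read neither it nor wrappedKey.
        System.out.println("blob bytes: " + uploadedBlob.length
                + ", wrapped key bytes: " + wrappedKey.length);
    }
}
```

The design point is that the server only ever touches ciphertext: without the chat key, the wrapped file key is useless, and without the file key, the blob is noise.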
Public Chats

In addition to private messages, Telegram also supports public channels and public groups. All public chats are cloud chats. Like everything else on Telegram, the data you post in public communities is encrypted, both in storage and in transit — but everything you post in public will be accessible to everyone.

Phone Number and Contacts

Telegram uses phone numbers as unique identifiers so that it is easy for you to switch from SMS and other messaging apps and retain your social graph.

Cookies

Telegram promises that the only cookies they use are those needed to operate and provide their services on the web. They clearly state that they don't use cookies for profiling or advertising. Their cookies are small text files that allow them to provide and customize their services and deliver an enhanced user experience. Most importantly, permission from the user is required before cookies are allowed into the browser; whether or not to use them is a choice made by the users.

So, how does Telegram remain in business?

Telegram's business model doesn't match that of a revenue-generating service. The founder, Pavel Durov, is also the founder of the popular Russian social networking site VK. Telegram doesn't charge for any messaging services, and it doesn't show ads yet. Some new in-app purchase features might be included in the new version. As of now, the main sources of revenue for Telegram are donations and the earnings of Pavel Durov himself (from the social networking site VK).

What can social networks learn from Telegram?

Telegram's policies elevate the privacy standards that many are asking of other social messaging apps. The clamor for stopping the exploitation of user data, and the use of location details for targeted marketing and advertising campaigns, is growing. Telegram shows that privacy can be achieved, if intended, in today's overexposed social media world.

But there are also costs to this level of user privacy and secrecy that are sometimes not discussed enough. The ISIS members behind the 2015 Paris attacks used Telegram to spread propaganda. ISIS also used the app to recruit the perpetrators of the Christmas market attack in Berlin last year and claimed credit for the massacre. More recently, a Turkish prosecutor found that the shooter behind the New Year's Eve attack at the Reina nightclub in Istanbul used Telegram to receive directions for it from an ISIS leader in Raqqa.

While these incidents can never negate the need for a secure and less intrusive social media platform like Telegram, there should be workarounds and escape routes designed to stop extremist and terrorist activities. Telegram has assured that all ISIS messaging channels are deleted from their network, which is a great way to start. Content moderation, proactive sentiment and pattern recognition, and content/account isolation are the next challenges for Telegram. One thing is for sure: Telegram's continual pursuit of user secrecy and user data privacy is throwing down an open challenge for others to follow suit. Whether others will oblige or not, only time will tell.

To read about Telegram's updated privacy policies in detail, you can check out the official Telegram Privacy Settings.

How to stay safe while using Social Media
Time for Facebook, Twitter and other social media to take responsibility or face regulation
What RESTful APIs can do for Cloud, IoT, social media and other emerging technologies

Are technical skills overrated when hiring tech pros?

Hari Vignesh
13 Jul 2017
5 min read
The giants of the tech industry have a lock on employer branding, but analysis of the personalities of their staff paints a different picture of what it's really like in the trenches. A study published recently by Good&Co analyzed psychometric data gained from anonymous personality quizzes completed by 4,364 tech employees at what are widely perceived as the five most innovative companies in Silicon Valley: Apple, Google, Facebook, Microsoft, and IBM. In total, the two-year-long study also analyzed 10 million responses from 250,000 users. Questions ranged from thoughts and feelings about networking to how they handle problems at work.

The study concluded that Facebook lags behind the others in cultivating a culture of creativity, and that Microsoft employees are more innovative than those at Apple. Both Apple's and Twitter's corporate cultures are the most accurate representations of how employees perceived them. Research from the Workforce Institute at Kronos and WorkplaceTrends indicated that part of the disconnect came from a dissonance between HR, management, and staff regarding who was in charge of creating and driving a company's culture.

When it comes to technical professionals, the majority of companies look at technical skills only. If the role involves programming, they test candidates over many rounds, and only the survivors get job offers. For tech people, technical skills are absolutely necessary, but there are other things that need equivalent validation.

Attitude and mindset matter

There are those who are firmly in the "attitude camp." They argue that you can teach skills, but that attitude is a reflection of personality, something hardwired and difficult to both teach and change. They also share the view that even the most skilled and experienced employees will fail if they have a poor attitude. Managers who hire for attitude believe it ensures you employ people who are better able to collaborate, receive feedback, adapt to workplace changes, and 'muck in' when times get tough; that in doing so, you will always hire people who will go the extra mile and continually seek ways to improve.

A lack of direct experience may be an asset

A new hire's lack of direct experience may actually serve as an asset, because with less history to cloud their vision, they may see problems in a new way and from a fresh perspective. This fresh perspective may result in them generating many new ideas and innovations. Less experienced new hires may be willing to take more risks because they haven't developed the fear level that often comes with extensive corporate experience. A fresh perspective will also undoubtedly lead new hires to question existing practices, and these inquiries may result in new approaches, ideas, and innovations. Because their lack of credentials in previous jobs may have increased the pressure on them to continually prove themselves, less experienced new hires may have been forced to excel in other important areas, including building relationships, creating stronger support networks, learning how to work harder, and bringing a "find-a-way" approach to their work.

Train 'em up, watch 'em go

So then, can you get it wrong when prioritizing attitude over skill? In short, yes. Inexperienced employees with excellent attitudes generally rise to the challenge, but they do require training from more experienced and costly employees, and this takes time and money.
Let’s face it, developing in-depth industry specific skill sets and critical thinking can take months, if not years. And what happens when you invest in this and they take off? Furthermore, many believe hiring for attitude rather than skill encourages bias, which can lead to discrimination and a lack of diversity in new hires. Is it healthy to have a team who all share the same attitudes? What can go wrong?  So what are the costs of getting it wrong? We all know hiring is no perfect science. In fact, it’s important to get it wrong, so you learn from your mistakes. And let’s face it, when highly experienced applicants are scarce, but you have an urgent requirement for the skill, it’s an easy trap to fall into, employing an applicant despite attitudinal ‘red flags’. If the choice is between leaving the position vacant for an unknown period of time versus managing the concerning characteristics, you can live with them, right?  Well, many believe “no”, you can’t. Hiring someone because they are “really qualified” but have a bad attitude can poison a workplace. In a very short space of time, they can undo all the hard work you have done to build a fantastic team and culture. Plus, you risk losing some of your best people, who won’t want to work with them. And what manager wants to spend time closely monitoring and managing a toxic team member? Final thoughts  When hiring tech pros, of course one would hope they have sufficient technical skills and foundation. But it is also imperative to ensure that your new hire has the right values and attitude for the company, in order to fit the culture. After all, if they have a great working attitude, probation period checks and training can ensure that their skills are consistently honed and maintained.  About the Author  Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades. 

Will Ethereum eclipse Bitcoin?

Ashwin Nair
24 Oct 2017
8 min read
Unless you have been living under a rock, you have most likely heard about Bitcoin, the world's most popular cryptocurrency, which is growing by leaps and bounds. In fact, Bitcoin recently broke the threshold of $6,000 and is now priced at an all-time high. Bitcoin is not alone in this race, as another cryptocurrency named Ethereum is hot on its heels. Despite being only three years old, Ethereum is quickly emerging as a popular choice, especially among enterprise users. Ethereum's YTD price growth has been a whopping 3,000%+. In terms of market cap, too, Ethereum has shown a significant increase: its share of the total cryptocurrency market rose from 5% at the beginning of the year to 30% YTD, and in absolute terms it stands today at around $28 billion. On the other hand, Bitcoin's market cap as a percentage of the market has shrunk from 85% at the start of the year to 55%, and is valued at around $90 billion.

Bitcoin played a huge role in bringing Ethereum into existence. The co-creator and inventor of Ethereum, Vitalik Buterin, was only 19 when his father introduced him to Bitcoin and, by extension, to the fascinating world of cryptocurrency. Within a span of 3 years, Vitalik had written several blogs on the topic and co-founded Bitcoin Magazine in 2011. Though Bitcoin served as an excellent tool for money transactions, eliminating the need for banks, fees, or third parties, its scripting language had limitations. This led Vitalik, along with other developers, to found Ethereum - a platform that aimed to extend beyond Bitcoin's scope and make the internet decentralized.

How Ethereum differs from the reigning cryptocurrency - Bitcoin

Both Bitcoin and Ethereum are built on top of blockchain technology, allowing them to build a decentralized public network. However, Ethereum's capability extends beyond being a cryptocurrency, and it differs from Bitcoin substantially in terms of scope and potential.

Exploiting the full spectrum of the blockchain platform

Bitcoin leverages blockchain's distributed ledger technology to perform secured peer-to-peer cash transactions, thus disrupting traditional financial transaction instruments such as PayPal. Meanwhile, Ethereum aims to offer much more than digital currency by helping developers build and deploy any kind of decentralized application on top of blockchain. The following are some Ethereum-based features and applications that make it superior to Bitcoin.

DApps

A decentralized app, or DApp, refers to a program running on the internet through a network, but not under the control of any single entity. A white paper on DApps highlights four conditions that need to be satisfied for an application to be called a DApp:

- It must be completely open source
- Data and records of operation must be cryptographically stored
- It should utilize a cryptographic token
- It must generate tokens

The whitepaper also goes on to suggest that DApps are the future: "decentralized applications will someday surpass the world's largest software corporations in utility, user-base, and network valuation due to their superior incentivization structure, flexibility, transparency, resiliency, and distributed nature."

Smart Contracts and EVM

Another feature that Ethereum boasts over Bitcoin is the smart contract. A smart contract works like a traditional contract: you can use it to perform a task or transfer money in return for any asset or task, in an efficient manner, without needing interference from a middleman.
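To make the idea concrete, here is a toy Java sketch of the kind of conditional logic a smart contract encodes: funds held until an agreed condition is met, with no human middleman. Real Ethereum contracts are written in Solidity and enforced by the whole network; this plain-Java class with made-up names is only a thought experiment:

```java
public class ToyEscrowContract {
    private final String buyer;
    private final String seller;
    private long lockedWei;   // funds held by the "contract"
    private boolean delivered;

    public ToyEscrowContract(String buyer, String seller) {
        this.buyer = buyer;
        this.seller = seller;
    }

    /** The buyer locks funds into the contract up front. */
    public void deposit(String from, long wei) {
        if (!from.equals(buyer)) throw new IllegalStateException("only the buyer can deposit");
        lockedWei += wei;
    }

    /** The buyer (or an agreed oracle) marks the task or delivery as complete. */
    public void confirmDelivery(String from) {
        if (!from.equals(buyer)) throw new IllegalStateException("only the buyer can confirm");
        delivered = true;
    }

    /** Funds are released to the seller only once the condition holds. */
    public long withdraw(String to) {
        if (!to.equals(seller) || !delivered) throw new IllegalStateException("condition not met");
        long payout = lockedWei;
        lockedWei = 0;
        return payout;
    }
}
```

On Ethereum, the network itself executes this logic, so neither party can change the rules after the fact.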
Though Bitcoin is fast, secure, and saves cost, it has limitations in terms of the operations it can run. Ethereum solves this problem by allowing operations to work as a contract, converting them to pieces of code that are supervised by a network of computers. A tool that helps Ethereum developers build and experiment with different contracts is the Ethereum Virtual Machine (EVM). It acts as a testing environment for building blockchain operations and is isolated from the main network, giving developers a perfect platform to build and test smart, robust contracts across different industries.

DAOs

One can also create Decentralized Autonomous Organizations (DAOs) using Ethereum. A DAO eliminates the need for human managerial involvement: the organization runs through smart contracts that convert the rules, core tasks, and structure of the organization into code monitored by a fault-tolerant network. An example of a DAO is Slock.it, a DAO version of Airbnb.

Performance

An important factor for cryptocurrency transactions is the amount of time it takes to finalize a transaction, known as block time. In terms of performance, the Bitcoin network takes 10 minutes to make a transaction, whereas Ethereum is much more efficient and boasts a block time of just 14-15 seconds.
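Both networks currently arrive at their block times through proof-of-work: miners search for a nonce whose block hash clears a difficulty target, and the target is tuned so blocks land at the desired rate. The toy Java sketch below mines a fake block against a made-up difficulty; it illustrates the mechanism only, not either network's real parameters or hashing scheme:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class ToyMiner {
    public static void main(String[] args) throws Exception {
        String blockHeader = "prevHash|txRoot|timestamp"; // stand-in for a real block header
        int difficulty = 5; // required number of leading hex zeros (made up)
        String target = "0".repeat(difficulty);

        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        long nonce = 0;
        while (true) {
            byte[] digest = sha256.digest(
                    (blockHeader + nonce).getBytes(StandardCharsets.UTF_8));
            String hex = toHex(digest);
            if (hex.startsWith(target)) {
                System.out.println("Mined! nonce=" + nonce + " hash=" + hex);
                break; // raising the difficulty means many more iterations, i.e. a longer block time
            }
            nonce++;
        }
    }

    private static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b));
        return sb.toString();
    }
}
```

Tweaking the difficulty constant up or down is, in miniature, how a network trades security for speed.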
Development

Ethereum's programming language, Solidity, is based on JavaScript. This is great for web developers who want to use their knowledge of JavaScript to build cool DApps and extend the Ethereum platform. Moreover, Ethereum is Turing-complete, meaning it can compute anything computable, provided enough resources are available. Bitcoin, on the other hand, is based on C++, which comparatively is not a popular choice among the new generation of app developers.

Community and Vision

One can say Bitcoin works like a DAO, with no individuals involved in managing the cryptocurrency; it is completely decentralized and owned by the community. Satoshi Nakamoto, who prefers to stay behind the curtains, is the only name one comes across when relating an individual to Bitcoin, so the community lacks a figurehead when it comes to seeking future direction. Meanwhile, Vitalik Buterin is hugely popular among Ethereum enthusiasts and is very much involved in designing the future roadmap with the other co-founders.

Cryptocurrency Supply

Similar to Bitcoin, Ethereum has Ether, a digital asset that fuels the network and the transactions performed on the platform. Bitcoin has a fixed supply cap of around 21 million coins, and it is going to take more than 100 years to mine the last Bitcoin, after which Bitcoin will behave as a deflationary cryptocurrency. Ethereum, on the other hand, has no fixed supply cap but has restricted its annual supply to 18 million Ethers. With no upper cap on the number of Ethers that can be mined, Ethereum behaves as an inflationary currency and may lose value with time. However, the Ethereum community is now planning to move from the proof-of-work to the proof-of-stake model, which should limit the number of Ethers being mined and also offer benefits such as energy efficiency and security.

Some real-world applications using Ethereum

The growth of decentralized applications has been on the rise, with people starting to recognize the value offered by blockchain and decentralization: security, immutability, tamper-proofing, and much more. While Bitcoin uses blockchain purely as a list of transactions, Ethereum manages to transfer value and information through its platform. Thus, it allows for immense possibilities when it comes to building different DApps across a wide range of industries.

The financial domain is obviously where Ethereum is finding a lot of traction. Projects such as Branche, a decentralized consumer microcredit and financial services platform, and Augur, a decentralized prediction market that has raised more than $5 million, are some prominent examples. But financial applications are only the tip of the iceberg when it comes to the possibilities Ethereum offers and the potential it holds for disrupting industries across various sectors. Some other sectors where Ethereum is making its presence felt are:

- Firstblood, a decentralized eSports platform which has raised more than $5.5 million. It allows players to test their skills and bet using Ethereum, while the tournaments are tracked on smart contracts and blockchain.
- Alice.Si, a charitable trust that lets donors invest in noble causes, knowing that they only pay for causes where the charity makes an impact.
- Chainy, an Ethereum-based authentication and verification system that permanently stores records on blockchain using timestamping.

Flippening is happening!

If you haven't heard of Flippening, it's a term coined by cryptocurrency enthusiasts for the chance of Ethereum beating Bitcoin to claim the number one spot as the largest-capitalized blockchain. Comparing Ethereum to Bitcoin may not be right, as both serve different purposes. Bitcoin will continue to dominate cryptocurrency, but as more industries adopt Ethereum to build the smart contracts, DApps, or DAOs of their choice, its popularity is only going to grow, subsequently making Ether more valuable. Thus, the possibility of Ether displacing Bitcoin is strong. With the pace at which Ethereum is growing, and the potential it holds for unleashing blockchain's power to transform industries, it is definitely a question of when rather than if the Flippening will happen!

How self-service analytics is changing modern-day businesses

Amey Varangaonkar
20 Nov 2017
6 min read
To stay competitive in today's economic environment, organizations can no longer rely on just their IT team for all their data consumption needs. At the same time, the need for quick insights to make smarter and more accurate business decisions is now stronger than ever. As a result, there has been a sharp rise in a new kind of analytics, where the information seekers can themselves create and access a specific set of reports and dashboards without IT intervention. This is popularly termed self-service analytics.

Gartner defines self-service analytics as:

"A form of business intelligence (BI) in which line-of-business professionals are enabled and encouraged to perform queries and generate reports on their own, with nominal IT support."

Expected to become a $10 billion market by 2022, self-service analytics is characterized by simple, intuitive, and interactive BI tools that have basic analytic and reporting capabilities with a focus on easy data access. It empowers business users to access relevant data and extract insights from it without needing to be experts in statistical analysis or data mining. Today, many tools and platforms for self-service analytics are already on the market, with Tableau, Microsoft Power BI, IBM Watson, QlikView, and Qlik Sense being some of the major ones. Not only have these empowered users to perform all kinds of analytics with accuracy, but their reasonable pricing, in-tool guidance, and sheer ease of use have also made them very popular among business users.

Rise of the Citizen Data Scientist

The rise in popularity of self-service analytics has led to the coining of a media-favored term: the 'citizen data scientist'. But what does the term mean? Citizen data scientists are business users and other professionals who can perform less intensive data-related tasks, such as data exploration, visualization, and reporting, on their own, using just self-service BI tools. If Gartner's predictions are to be believed, by 2019 there will be more citizen data scientists than traditional data scientists, and they will be performing a variety of analytics-related tasks.

How Self-service Analytics benefits businesses

Allowing the end users within a business to perform their own analysis has some important advantages compared to using traditional BI platforms:

- The time taken to arrive at crucial business insights is drastically reduced, because teams don't have to rely on the IT team to deliver specific reports and dashboards based on the organizational data.
- Quicker insights from self-service BI tools mean businesses can make decisions faster, with higher confidence, and deploy appropriate strategies to maximize business goals.
- Because of the relative ease of use, business users can get up to speed with self-service BI tools in no time and with very little training compared to being trained on complex BI solutions. This means relatively lower training costs and a democratization of BI analytics, which in turn reduces the workload on the IT team and allows them to focus on their own core tasks.
- Self-service analytics helps users manage data from disparate sources more efficiently, allowing organizations to be more agile in handling new business requirements.

Challenges in Self-service analytics

While self-service analytics platforms offer many benefits, they come with their own set of challenges too.
Let’s see some of them: Defining a clear role for the IT team within the business by addressing concerns such as: Identifying the right BI tool for the business - Among the many tools out there, identifying which tool is the best fit is very important. Identifying which processes and business groups can make the best use of self-service BI and who may require assistance from IT Setting up the right infrastructure and support system for data analysis and reporting Answering questions like - who will design complex models and perform high-level data analysis Thus, rather than becoming secondary to the business, the role of the IT team becomes even more important when adopting a self-service business intelligence solution. Defining a strict data governance policy - This is a critical task as unauthorized access to organizational data can be detrimental to the business. Identifying the right ‘power users’, i.e., the users who need access to the data and the tools, the level of access that needs to be given to them, and ensuring the integrity and security of the data are some of the key factors that need to be kept in mind. The IT team plays a major role in establishing strict data governance policies and ensuring the data is safe, secure and shared across the right users for self-service analytics. Asking the right kind of questions on the data - When users who aren’t analysts get access to data and the self-service tools, asking the right questions of the data in order to get useful, actionable insights from it becomes highly important. Failure to perform correct analysis can result in incorrect or insufficient findings, which might lead to wrong decision-making. Regular training sessions and support systems in place can help a business overcome this challenge. To read more about the limitations of self-service BI, check out this interesting article. In Conclusion IDC has predicted that spending on self-service BI tools will grow 2.5 times than spending on traditional IT-controlled BI tools by 2020. This is an indicator that many organizations worldwide and of all sizes will increasingly believe that self-service analytics is a feasible and profitable way to go forward. Today mainstream adoption of self-service analytics still appears to be in the early stages due to a general lack of awareness among businesses. Many organizations still depend on the IT team or an internal analytics team for all their data-driven decision-making tasks. As we have already seen, this comes with a lot of limitations - limitations that can easily be overcome by the adoption of a self-service culture in analytics, and thus boost the speed, ease of use and quality of the analytics. By shifting most of the reporting work to the power users,  and by establishing the right data governance policies, businesses with a self-service BI strategy can grow a culture that fuels agile thinking, innovation and thus is ready for success in the marketplace. If you’re interested in learning more about popular self-service BI tools, these are some of our premium products to help you get started:   Learning Tableau 10 Tableau 10 Business Intelligence Cookbook Learning IBM Watson Analytics QlikView 11 for Developers Microsoft Power BI Cookbook    

The Future is Node

Owen Roberts
09 Sep 2016
2 min read
In the last few years, we've seen Node.js explode onto the tech scene and go from strength to strength. In fact, the rate of adoption has been so great that the Node Foundation has mentioned that in the last year alone, the number of developers using the server-side platform grew by 100% to reach a staggering 3.5 million users. Early adopters of Node have included Netflix, PayPal, and even Walmart. The Node fanbase is constantly building new Node Package Manager (npm) packages to share among themselves. With React and Angular offering the perfect accompaniment to Node in modern web applications, along with a host of JavaScript tools like Gulp and Grunt that follow Node's best practices for easier development, Node has become an essential tool for the modern JavaScript developer, and one that shows no signs of slowing down or being replaced. Whether Node will be around a decade from now remains to be seen, but with a hungry user base, thousands of user-created npm packages, and full-stack JavaScript moving to be the cornerstone of most web applications, it's definitely not going anywhere anytime soon. For now, the future really is Node.

Want to get started learning Node? Or perhaps you're looking to give your skills a boost to ensure you stay on top? We've just released Node.js Blueprints, and if you're looking to see the true breadth of possibilities that Node offers, then there's no better way to discover how to apply this framework in new and unexpected ways. But why wait? With a free Mapt account you can read the first 2 chapters for nothing at all! When you're ready to continue learning just what Node can do, sign up to Mapt to get unlimited access to every chapter in the book, along with the rest of our entire video and eBook range, at just $29.99 per month!

Five Biggest Challenges in Information Security in 2017

Charanjit Singh
08 Nov 2016
5 min read
Living in the digital age brings its own challenges. News of security breaches at well-known companies is becoming a normal thing. In the battle between those who want to secure the Internet and those who want to exploit its security vulnerabilities, here's a list of five significant challenges that I think information security is or will be facing in 2017.

Army of young developers

Everyone's beloved celebrity is encouraging the population to learn how to code, and it's working. Learning to code is becoming easier every day. There are loads of apps and programs to help people learn to code, but not many of them care to teach how to write secure code. Security is usually left as an afterthought, an "advanced" topic to learn sometime in the future. Even without the recent fame, software development is a lucrative career. It has attracted a lot of 9-to-5ers who just care about getting through the day and collecting their paycheck. This army of young developers who care little about the craft is most to blame when it comes to vulnerabilities in applications. It would astonish you to learn how many people simply don't care about the security of their applications, and the pressure to ship and ever-slipping deadlines don't make it any better.
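As a concrete taste of what "secure code" means in practice, consider the classic SQL injection mistake. The sketch below contrasts string-built SQL with a parameterized query using plain JDBC; the table and column names are made up for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class LoginDao {
    // VULNERABLE: input like  ' OR '1'='1  changes the meaning of the query.
    static ResultSet findUserUnsafe(Connection conn, String userInput) throws Exception {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery(
                "SELECT id, name FROM users WHERE name = '" + userInput + "'");
    }

    // SAFE: the driver sends the value separately, so it can never be parsed as SQL.
    static ResultSet findUserSafe(Connection conn, String userInput) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM users WHERE name = ?");
        ps.setString(1, userInput);
        return ps.executeQuery();
    }
}
```

The fix costs one line; knowing that it's needed at all is exactly what this section worries about.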
Rise of the robots

I mean IoT devices. Sorry, I couldn't resist the temptation. IoT devices are everywhere. "Internet of Things," they call it. As if the Internet wasn't insecure enough already, it's on "things" now. Most of these things rarely have any concept of security. Your refrigerator can read your tweets, and so can your 13-year-old neighbor. We've already seen plenty of famous disclosures of cars getting hacked, which is one example of how dangerous this can get. Routers and other such infrastructure devices are becoming smarter and smarter, and the more power they get, the more lucrative they become for a hacker to attack. Your computer may have a firewall, anti-virus, and other fancy security software, but your router might not. Most people don't even change the default password on such devices. It's much easier for an attacker to simply control your means of connecting to the Internet than to connect to your device directly. On the other front, these devices can be (and have been) used as bots to launch attacks (like DDoS) elsewhere.

Internet advertisements as malware

The Internet economy is hugely dependent on advertisements. Advertising is a big business, but it is becoming uglier every day. As if tracking users all over the web and breaching their privacy weren't enough, advertisements are now used for spreading malware. Ads are very attractive to attackers, as they can be used to distribute content on fully legitimate sites without actually compromising them. They've already been in the news for this very reason lately. So ads can potentially be used to do great damage.

Mobile devices

Mobile apps go everywhere you go. That cute little tap game you installed yesterday might result in the demise of your business. And that's just the tip of the iceberg. Android will hopefully add essential features to limit the permissions granted to installed apps. New exploits are emerging every day for vulnerabilities in mobile operating systems and even in processor chips. Your company might have a secure network with every box checked, but what about the laptop and mobile device that Cindy brought in? Organizations need to be ever more careful about the electronic devices their employees bring onto the premises or use to connect to the company network. The house of security cards crumbles fast if attackers get access to the network through a legitimate medium.

The weakest links

If you follow the show Mr. Robot (you should, it's brilliant), you might remember a scene from the first season when they plan to attack the "impenetrable" Steel Mountain. Quoting Elliot:

Nothing is actually impenetrable. A place like this says it is, and it's close, but people still built this place, and if you can hack the right person, all of a sudden you have a piece of powerful malware. People always make the best exploits.

People are the weakest links in many technically secure setups. They're the easiest to hack. Social engineering is the most common (and probably easiest) way to get access to an otherwise secure system. With the rise in advanced social engineering techniques, it is becoming crucial to teach employees how to detect and prevent such attacks. Even if your developers are writing secure code, it doesn't matter if a customer care representative just gives the password away or grants access to an attacker. Here's a video of how someone can break into your phone account with a simple call to your phone company. Once your phone account is gone, all your two-factor authentications (those that depend on SMS-based OTPs) are worth nothing.

About the author

Charanjit Singh is a freelance JavaScript (React/Express) developer. Being an avid fan of functional programming, he's on his way to taking on Haskell/PureScript as his main professional languages.