
Tech Guides - Cloud & Networking


Top 10 IT certifications for cloud and networking professionals in 2018

Vijin Boricha
05 Jul 2018
7 min read
Certifications have always been one of the best ways to boost an IT career. Whichever domain you choose, you will have the upper hand if your resume showcases some valuable IT certifications. Certified professionals attract employers because a certification is external validation that an individual is competent in a given technical skill. Certifications push individuals to think outside the box, become more efficient in what they do, and execute goals with minimal errors. If you are looking to enhance your skills and increase your salary, this is a tried and tested method. Here are the top 10 IT certifications that will help you advance your IT career.

1. AWS Certified Solutions Architect - Associate

AWS is currently the market leader in the public cloud, a finding the Packt Skill Up Survey 2018 confirms (source: Packt Skill Up Survey 2018). AWS Cloud from Amazon offers a cutting-edge platform for architecting, building, and deploying web-scale cloud applications, and with the rapid adoption of the cloud platform, the need for cloud certifications has also increased. IT professionals with some AWS Cloud experience who are interested in designing effective cloud solutions opt for this certification. The exam tests your ability to architect and deploy secure and robust applications on AWS technologies. Candidates who fail must wait 14 days before they are eligible to retake the exam; there is no limit on the number of attempts. AWS certification passing scores depend on statistical analysis and are subject to change.

Exam fee: $150
Average salary: $119,233 per annum
Number of questions: 65
Type of question: multiple choice
Available languages: English, Japanese

2. AWS Certified Developer - Associate

This is another role-based AWS certification that has gained enough traction for industries to treat it as a job validator. The exam validates the software development knowledge needed to build cloud applications on AWS. IT professionals with hands-on experience in designing and maintaining AWS-based applications should definitely consider this certification to stand out. The same retake policy applies: candidates who fail must wait 14 days, there is no attempt limit, and passing scores depend on statistical analysis and are subject to change.

Exam fee: $150
Average salary: $116,456 per annum
Number of questions: 65
Type of question: multiple choice
Available languages: English, Simplified Chinese, Japanese

3. Project Management Professional (PMP)

Project Management Professional is one of the most valuable certifications for project managers. Its beauty is that it not only teaches creative methodologies but makes practitioners proficient in whatever industry domain they pursue; the techniques and knowledge it imparts apply to any industry, globally. The certification attests that PMP-certified project managers can complete projects on time, within the desired budget, and in line with the original project goal.

Exam fee: $555 (non-PMI members) / $405 (PMI members)
Average salary: $113,000 per annum
Number of questions: 200
Type of question: a combination of multiple choice and open-ended
Passing threshold: 80.6%

4. Certified Information Systems Security Professional (CISSP)

CISSP is one of the most globally recognized security certifications. This cybersecurity certification is a great way to demonstrate your expertise and build industry-level security skills. On achieving it, you will be well versed in designing, engineering, implementing, and running an information security program. Candidates need at least five years of working experience to be eligible. The certification measures your competence in designing and maintaining a robust security environment.

Exam fee: $699
Average salary: $111,638 per annum
Number of questions: 250 (each question carries 4 marks)
Type of question: multiple choice
Passing threshold: 700 marks

5. CompTIA Security+

CompTIA Security+ is a vendor-neutral certification used to kick-start a career as a security professional. It acquaints users with all aspects of IT security. If you are inclined towards systems administration, network administration, or security administration, this is something you should definitely go for. Candidates learn the latest trends and techniques in risk management, risk mitigation, threat management, and intrusion detection.

Exam fee: $330
Average salary: $95,829 per annum
Number of questions: 90
Type of question: multiple choice
Available languages: English (Japanese, Portuguese, and Simplified Chinese estimated Q2 2018)
Passing threshold: 750/900

6. CompTIA Network+

Another CompTIA certification! Why? CompTIA Network+ helps individuals develop their careers and validates their skills in troubleshooting, configuring, and managing both wired and wireless networks. If you are an entry-level IT professional interested in managing, maintaining, troubleshooting, and configuring complex network infrastructures, this one is for you.

Exam fee: $302
Average salary: $90,280 per annum
Number of questions: 90
Type of question: multiple choice
Available languages: English (in development: Japanese, German, Spanish, Portuguese)
Passing threshold: 720 (on a scale of 100-900)

7. VMware Certified Professional 6.5 - Data Center Virtualization (VCP6.5-DCV)

Yes, even today virtualization is highly valued in many industries. This industry-recognized certification develops the skills and abilities needed to install, configure, and manage a vSphere 6.5 infrastructure, and validates a candidate's knowledge of implementing, managing, and troubleshooting it. It also helps IT professionals build a foundation for business agility that can accelerate the transformation to cloud computing.

Exam fee: $250
Average salary: $82,342 per annum
Number of questions: 46
Available language: English
Type of question: single and multiple choice
Passing threshold: 300 (on a scale of 100-500)

8. CompTIA A+

Yet another CompTIA certification, this one gives entry-level IT professionals an upper hand. It is aimed at individuals building a career in technical support or IT operations roles; if you are thinking about more than just PC repair, it is for you. By entry-level I mean a certification that can be pursued while still in college or secondary school. CompTIA A+ is a basic version of Network+: it only touches on basic network infrastructure issues while bringing you up to industry standards.

Exam fee: $211
Average salary: $79,390 per annum
Number of questions: 90
Type of question: multiple choice
Available languages: English, German, Japanese, Portuguese, French, and Spanish
Passing threshold: 72% for the 220-801 exam and 75% for the 220-802 exam

9. Cisco Certified Network Associate (CCNA)

Cisco Certified Network Associate (CCNA) Routing and Switching is one of the most important IT certifications for keeping your networking skills up to date. It is a foundational certification for individuals interested in a high-level networking profession. The exam validates knowledge and skills in networking, LAN switching, IPv4 and IPv6 routing, WAN, infrastructure security, and infrastructure management. It not only validates networking fundamentals but also keeps candidates relevant with the skills needed to adopt next-generation technologies.

Exam fee: $325
Average salary: $55,166-$90,642
Number of questions: 60-70
Available languages: English, Japanese
Type of question: multiple choice
Passing threshold: 825/1000

10. Certified Information Security Manager (CISM)

Lastly, we have Certified Information Security Manager (CISM), offered by ISACA, a nonprofit association, and aimed at security professionals involved in information security, risk management, and governance. This is an advanced-level certification for experienced individuals who develop and manage enterprise information security programs. Only candidates with five years of verified experience, including three years in information security management, are eligible.

Exam fee: $415-$595 (cheaper for members)
Average salary: $52,402-$243,610
Number of questions: 200
Passing threshold: 450 (on a scale of 200-800)
Type of question: multiple choice

Confused about which certification to take up? Leave the noisy thoughts aside and choose wisely: pick an exam aligned with your interests. If you want to pursue IT security, don't end up going for cloud certifications. No career option is fun unless you pursue it wholeheartedly. Take the right step and make it count.

Read next:
- Why AWS is the preferred cloud platform for developers working with big data?
- 5 reasons why your business should adopt cloud computing
- Top 5 penetration testing tools for ethical hackers

Why is Pentaho 8.3 great for DataOps?

Guest Contributor
07 Oct 2019
6 min read
Announced in July, Pentaho 8.3 is the latest version of the data integration and analytics platform from Hitachi Vantara. Along with new and improved features, this version supports DataOps, a collaborative data management practice that helps customers access the full potential of their data.

"DataOps is about having the right data, in the right place, at the right time, and the new features in Pentaho 8.3 ensure just that," said John Magee, vice president, Portfolio Marketing, Hitachi Vantara. "Not only do we want to ensure that data is stored at the lowest cost at the right service level, but that data is searchable, accessible and properly governed so actionable insights can be generated and the full economic value of the data is captured."

How Pentaho prevents the loss of data

According to Stewart Bond, research director, Data Integration and Integrity Software, and Chandana Gopal, research director, Business Analytics Solutions, both at IDC: "A vast majority of data that is generated today is lost. In fact, only about 2.5% of all data is actually analyzed. The biggest challenge to unlocking the potential that is hidden within data is that it is complicated, siloed and distributed. To be effective, decision makers need to have access to the right data at the right time and with context."

The struggle is how to manage all the incoming data in a way that exposes everyone to what's coming down the pipeline. When data is siloed, there is no guarantee the right people are seeing it and analyzing it. Pentaho is a single platform that helps businesses keep up with data growth while enabling real-time data ingestion. With its data services, you can:

- Make data sets immediately available for reports and applications.
- Reduce the time needed to create data models.
- Improve collaboration between business and IT teams.
- Analyze results with embedded machine learning and deep learning models without having to code them into data pipelines.
- Prepare and blend traditional data with big data.

Making all the data more accessible across the board is a key strength of Pentaho, and this latest release continues to build on it.

What's new in Pentaho 8.3?

The latest version of Pentaho includes new features to support DataOps. DataOps shortens the overall cycle time of big data analytics, from the initial idea through to the finished visualization. Pentaho 8.3 is designed to promote easy management of and collaboration around data, making the analytics process more agile so that data teams can work in sync, and increasing both efficiency and effectiveness. Businesses are looking to transform their data digitally and get more value from their massive pools of information; as data becomes more distributed than ever, they want key insights from it quickly and easily. This is exactly where Pentaho 8.3 comes into the picture: it accelerates business innovation and agility, and plenty of new, time-saving enhancements make it a more advanced solution for enterprises, helping companies automate their data management techniques.

Key enhancements in Pentaho 8.3

Each enhancement included with Pentaho 8.3 helps organizations modernize their data management practices and remove friction between data and insight.

Improved drag-and-drop pipeline capabilities. These help access and blend hard-to-reach data to provide deeper insights and greater analytic value from enterprise integration. Amazon Web Services (AWS) developers can also now ingest and process streaming data through a visual environment rather than having to write code that must blend with other data.

Enhanced data visibility. Improved integration with Hitachi Content Platform (HCP), a distributed object storage system designed to support large repositories of content, makes it easier for users to read, write, and update HCP customer metadata. They can also more easily query objects with their system metadata, making data more searchable, governable, and applicable for analytics. It is also now easier to trace real-time data from popular protocols like AMQP, JMS, Kafka, and MQTT, and users can view lineage data from Pentaho within IBM's Information Governance Catalog (IGC) to reduce the effort required to govern data.

Expanded multi-cloud support. AWS Redshift bulk-load capabilities now automate the process of loading Redshift. This removes the repetitive SQL scripting otherwise needed to complete bulk loads, letting users boost productivity and apply policies and schedules for data onboarding. Updates in this category also address Snowflake connectivity. As one of the leading destinations for cloud warehousing, Snowflake's primary hiccup is when an analytics project wants to include data from other sources. Pentaho 8.3 allows blending, enrichment, and analysis of Snowflake data in conjunction with other sources, including other cloud sources such as the existing Pentaho-supported platforms AWS and Google Cloud.
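To make concrete what that automation replaces, here is a sketch of the kind of hand-written Redshift bulk-load script teams would otherwise maintain. It is illustrative only: the psycopg2 driver choice, cluster endpoint, IAM role, and table and bucket names are assumptions, not details from the release.

```python
# A sketch of the manual Redshift bulk load that Pentaho 8.3's
# bulk-load step automates. All identifiers below are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="loader",
    password="example-password",
)
with conn, conn.cursor() as cur:
    # Redshift's COPY pulls files from S3 in parallel across the cluster.
    cur.execute("""
        COPY staging.sales
        FROM 's3://example-bucket/sales/2019-10-07/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftLoader'
        FORMAT AS CSV;
    """)
conn.close()
```

Multiply a script like this by every table, schedule, and retry policy, and the appeal of a managed bulk-load step becomes clear.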
Pentaho and DataOps

Each of the new capabilities and enhancements in this release is important for current users, but the larger benefit to businesses is the association with DataOps. Emerging as a collaborative data management discipline focused on better communication, integration, and automation of how data flows across an organization, DataOps is being embraced more and more often, yet not without its own setbacks. Pentaho 8.3 helps businesses make DataOps a reality without facing the common challenges often associated with data management. According to John Magee, Vice President, Portfolio Marketing at Hitachi: "The new Pentaho 8.3 release provides key capabilities for customers looking to begin their DataOps journey."

Beyond feature enhancements

Looking past the improvements and new features of the latest release, Pentaho is a good product because of the support it offers its community of users. From forums to webinars to 24/7 support, it not only caters to huge volumes of data on a practical level, but it also doesn't ignore the actual people using the product.

Author bio: James Warner is a Business Intelligence Analyst with excellent knowledge of Hadoop/big data analysis at NexSoftSys.com.

Read next:
- New MapR Platform 6.0 powers DataOps
- DevOps might be the key to your Big Data project success
- Bridging the gap between data science and DevOps with DataOps

5 reasons why your business should adopt cloud computing

Vijin Boricha
11 Jun 2018
6 min read
Businesses are refocusing on using existing technology to accomplish their 2018 business targets. Although cloud services have been around for a while, many organizations hesitated to make the move, but recent improvements in cost-effectiveness, portability, agility, and connectivity have attracted attention from organizations large and small. So if your organization is looking for ways to achieve greater heights and you are exploring healthy investments, your first choice should be cloud computing: the on-premises server system is fading away.

You don't need any proof to agree that cloud computing is playing a vital role in changing the way businesses work today. Organizations look to the cloud both to widen their business reach (revenue, growth, sales) and to run more efficiently (cost savings, bottom line, ROI). There are three major cloud options growing businesses can consider:

- Public cloud
- Private cloud
- Hybrid cloud

A Gartner report states that by 2020 big vendors will shift from cloud-first to cloud-only policies. If you are wondering what could fuel this predicted rise in cloud adoption, look no further. Below are some of the factors driving businesses to adopt cloud computing.

Cloud offers increased flexibility

One of the most beneficial aspects of adopting cloud computing is its flexibility, whatever the size of the organization and wherever employees are located. Cloud computing offers a wide range of options, from modifying storage space to supporting both in-office and remote employees. This makes it easy for businesses to scale server loads up and down, while giving employees the benefit of working from anywhere, at any time, with no timezone restrictions. Cloud services, in a way, let businesses focus on revenue growth rather than spending time and resources on building hardware and software capabilities.

Cloud computing is cost effective

Cloud-backed businesses definitely benefit on cost, because there is no need to maintain expensive in-house servers and other costly devices; everything is handled in the cloud. If you want your business to grow, you spend on storage space and pay only for the services you use. Cost transparency helps organizations plan their expenditure, and pay-per-use is one of the biggest advantages businesses can leverage. With cloud adoption, you eliminate spending on extra processing power, hard drive space, and large data centers. With less hardware to manage, you do not need a large IT team to handle it, and software licensing costs are eliminated because the software is hosted in the cloud and paid for according to use.

Scalability is easier with cloud

The best part about cloud computing is its support for unpredicted requirements, helping businesses scale resources up or down quickly and efficiently. It is all about modifying your subscription plan, upgrading your storage or bandwidth as your business needs change. This kind of scalability improves business performance and minimizes the risks of up-front investment in operational issues and maintenance.

Better availability means less downtime and better productivity

With cloud adoption you need not worry much about downtime: cloud services are reliable and maintain close to 100% uptime, which means whatever you own on the cloud is available to your customers at almost any point. For every server breakdown, cloud service providers keep a backup server in place to avoid losing essential data. This can barely be achieved by traditional on-premises infrastructure, which is another reason businesses should switch to the cloud. These mechanisms also make it easy to share files and documents with teammates, thanks to flexible accessibility. Teams can collaborate more effectively when they can access documents anytime, anywhere, which improves workflow and gives businesses a competitive edge. Being present in the office to complete tasks is no longer a requirement for productivity, and a better work/life balance is an added side effect. In short, you need not worry about operational disasters, and you can get the job done without being physically present in the office.

Automated backups

One major problem with an on-premises data center is that everything depends on the functioning of your physical systems. If you lose a device, or some kind of disaster befalls your physical infrastructure, it can mean losing your data as well. This is never the case with the cloud: you can access your files and documents from any device and any location. Organizations bear a massive expense for regular backups, whereas cloud computing comes with automatic backups and provides enterprise-grade resilience to businesses of all sizes.

If you are thinking about data security, the cloud is a safe option, because each of the cloud variants (private, public, and hybrid) has its own set of benefits. If you are not dealing with sensitive data, a public cloud is the best option, whereas for sensitive data businesses should opt for a private cloud, where they have total control over security policies. A hybrid cloud, on the other hand, lets you benefit from both worlds: if you are looking for scalable solutions along with a more controlled architecture for data security, a hybrid architecture will blend well with your business needs, letting you pick and choose the public or private cloud services required to fulfill your business requirements.

Migrating your business to the cloud definitely has more advantages than disadvantages. It increases organizational efficiency and fuels business growth. Cloud computing reduces time-to-market, facilitates product development, keeps employees happy, and builds a desired workflow. In the end, this helps your organization achieve greater success, and it doesn't hurt that the money you save is available to invest in areas in dire need of some cash inflow!

Read next:
- What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
- Serverless computing wars: AWS Lambdas vs Azure Functions
- How machine learning as a service is transforming cloud

AIOps - Trick or Treat?

Bhagyashree R
31 Oct 2018
2 min read
AIOps, as the term suggests, is Artificial Intelligence for IT operations; the term was first introduced by Gartner last year. AIOps systems are used to enhance and automate a broad range of processes and tasks in IT operations with the help of big data analytics, machine learning, and other AI technologies.

Read also: What is AIOps and why is it going to be important?

In its report, Gartner estimated that by 2020 approximately 50% of enterprises will be actively using AIOps platforms to provide insight into both business execution and IT operations. AIOps has grown fairly fast since its introduction, with many big companies showing interest in AIOps systems. For instance, last month Atlassian acquired Opsgenie, an incident management platform that, along with planning for and resolving IT issues, helps you gain insight to improve your operational efficiency. Companies are adopting AIOps because it eliminates tedious routine tasks, minimizes costly downtime, and surfaces insights from data that's trapped in silos.

Where can AIOps go wrong?

AIOps alerts us about incidents beforehand, but in some situations it can go wrong. When an event is unusual, the system is less likely to predict it, and events that have never occurred before are entirely outside what machine learning can predict or analyze. Additionally, it can produce false negatives and false positives: false negatives can happen when tests are not sensitive enough to detect possible issues, while false positives can result from incorrect configuration. This essentially means there will always be a need for human operators to review these alerts and warnings.

Is AIOps a trick or a treat?

AIOps is creating new opportunities for the IT workforce, such as the AIOps Data Scientist, who focuses on solutions to correlate, consolidate, alert, analyze, and provide awareness of events. Dell defines its Data Scientist role as someone who will "contribute to delivering transformative AIOps solutions on their SaaS platform". With AIOps, the IT workforce won't just disappear; it will evolve. AIOps is definitely a treat because it reduces manual work and provides an intuitive way of responding to incidents.

Read next:
- What is AIOps and why is it going to be important?
- 8 ways Artificial Intelligence can improve DevOps
- Tech hype cycles: do they deserve your attention?

5 things to remember when implementing DevOps

Erik Kappelman
05 Dec 2017
5 min read
DevOps is a much more realistic and efficient way to organize the creation and delivery of technology solutions to customers. But like practically everything else in the world of technology, DevOps has become a buzzword that is often thrown around willy-nilly. Let's cut through the fog and highlight concrete steps that will help an organization implement DevOps.

DevOps is about bringing your development and operations teams together

This might seem like a no-brainer, but DevOps is often explained in terms of tools rather than techniques or philosophical paradigms. At its core, DevOps is about uniting developers and operators, getting these groups to communicate effectively with each other, and then using this new communication to streamline various processes. This could include a physical change to the layout of an organization's workspace; it's incredible what can change just by rearranging the seating in an office. In a very large organization, development and operations might be in separate buildings, separate campuses, or even separate cities. While the efficacy of web-based communication has increased dramatically over the last few years, there is still no replacement for daily face-to-face human interaction. Putting developers and operators in the same physical space will increase the rate of adoption and the efficacy of the various DevOps tools and techniques.

DevOps is all about updates

Updates can be aimed at expanding functionality or simply at fixing or streamlining existing processes. Updates present two problems to developers and operators. First, everybody needs to keep working on the same codebase, which can be achieved with a variety of continuous integration tools. The goal of continuous integration is to make sure that changes and updates to the codebase are integrated as close to continuously as possible, which helps avoid the merge problems that result from multiple developers working on the same codebase at the same time. Second, these updates need to be integrated into the final product. For this task, DevOps applies the concept of continuous deployment: essentially the same idea as continuous integration, but concerned with deploying changes to production rather than integrating them into the codebase. Both are equally important to the DevOps process. Moving updates from a developer's workspace to the codebase to production should be seamless, smooth, and continuous.

Implementing a microservices structure is imperative for an effective DevOps approach

Microservices are an extension of the service-based structure. Basically, a service structure calls for modularizing a solution's codebase into units based on functionality; microservices take this a step further by implementing a service-based structure in which each service performs a single task. While a service-based or microservice structure is not required for implementing DevOps, microservices lend themselves extremely well to it. One way to think of a microservice structure is to imagine an ant hill in which all of the worker ants are microservices. Each ant has a specific set of abilities and is given a task by the queen. The ant then autonomously performs this task, usually gathering food, along with all of its ant friends. Remove a single ant from the pile and nothing really happens; replace an old ant with a new ant and nothing really happens. The metaphor isn't perfect, but it strikes at the heart of why microservices are valuable in a DevOps framework. If we need to be continuously integrating and deploying, shouldn't we try to impact the codebase as directly as we can? When microservices are in use, changes can be made at an extremely granular level, which allows continuous integration and deployment to really shine.

Monitor your DevOps solutions

In order to deploy continuously, applications also need to be monitored continuously. This allows problems to be identified quickly, which tends to reduce the total effort required to fix them. Your application should obviously be monitored for whether it is working as it currently should, but users also need to be able to give feedback on the application's functionality; when reasonable, this feedback should be integrated back into the application. Monitoring user feedback tends to fall by the wayside in discussions of DevOps. It shouldn't. The whole point of the DevOps process is to improve the user experience, and if you're not getting feedback from users in a timely manner, it's practically impossible to improve their experience.
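As a small illustration of the first half of that monitoring story, many teams expose a health endpoint that a monitoring system polls continuously. Here is a minimal sketch in Python using Flask; the route name and the checks behind it are illustrative assumptions, not a prescription.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # In a real service this would also verify downstream dependencies
    # (database, message queue, etc.) before reporting healthy.
    return jsonify(status="ok"), 200

if __name__ == "__main__":
    app.run(port=8000)
```

A monitoring tool polling this endpoint turns "is it up?" into a continuous signal; the user-feedback half of monitoring still needs its own channel.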
Keep it loose and experiment

Part of the beauty of DevOps is that it can allow for more experimentation than other development frameworks. When microservices and continuous integration and deployment are being fully utilized, it's fairly easy to incorporate experimental changes into applications. If an experiment fails, or doesn't do exactly what was expected, it can be removed just as easily. Basically, remember why DevOps is being used and really try to get the most out of it.

DevOps can be complicated, and boiling anything down to five steps is difficult, but if you act on these five fundamental principles you will be well on your way to putting DevOps into practice. And while it's fun to talk about what DevOps is and isn't, ultimately that's the whole point: to actually uncover a better way to work with others.

Serverless computing wars: AWS Lambdas vs Azure Functions

Vijin Boricha
03 May 2018
5 min read
In recent times, local servers and on-premises computers have come to be seen as old school. Users and organizations have shifted their focus to the cloud to store, manage, and process data. Cloud computing has evolved to the point where DevOps teams can focus on improving code and processes rather than on provisioning, scaling, and maintaining servers. We have now entered the serverless era, and the big players of this era are AWS Lambda and Azure Functions. As a developer, you no longer need to worry about low-level infrastructure decisions. Which brings us to the bigger question.

What is serverless computing / Function-as-a-Service?

Function-as-a-Service (FaaS) is a form of serverless computing in which applications depend on third-party services to manage the server-side logic. This means application developers can concentrate on building their applications rather than thinking about servers: everything required to run and scale your application is handled for you. Popular platforms that support FaaS include:

- AWS Lambda
- Azure Functions
- Cloud Functions
- Iron.io
- Webtask.io

Benefits of serverless computing

Serverless applications and architectures are gaining momentum and are increasingly being used by companies of all sizes. Serverless technology rapidly reduces production time and minimizes costs, while you still have the freedom to customize your code without hindering functionality. For good reason: serverless-based software takes care of many of the problems developers face when running systems and servers, such as fault tolerance, centralized logging, horizontal scalability, and deployments, to name a few. Additionally, the serverless pay-per-invocation model can result in drastic cost savings. Since AWS Lambda and Azure Functions are the most popular and widely used serverless computing platforms, we will discuss these services further.

AWS Lambda

AWS is recognized as one of the largest market leaders in cloud computing, and one of its services that has gained a lot of traction recently is AWS Lambda. It is the part of Amazon Web Services that lets you run your code without provisioning or managing servers. AWS Lambda is a compute service that enables you to deploy applications and back-end services with zero upfront cost and no system administration. Although seemingly simple and easy to use, Lambda is a highly effective and scalable compute service that gives developers a powerful platform to design and develop serverless, event-driven systems and applications.

Pros:
- Supports automatic scaling
- Supports an unlimited number of functions
- First 1 million requests free, then $0.20 per 1 million invocations, plus $0.00001667/GB per second

Cons:
- Limited concurrent executions (1,000 per account)
- Supports fewer languages than Azure (JavaScript, Java, C#, and Python)
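To give a feel for the programming model, here is a minimal Lambda handler in Python. It is a sketch only: the query parameter and response shape are invented for illustration, though the (event, context) signature is Lambda's standard Python convention.

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload, e.g. an API Gateway request;
    # 'context' exposes runtime details such as the remaining execution time.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Everything around this function, from provisioning to scaling to log collection, is the platform's job, which is the whole point of FaaS.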
Azure Functions

Microsoft provides a solution you can use to easily run small segments of code in the cloud: Azure Functions. It provides solutions for processing data, integrating systems, and building simple APIs and microservices. Azure Functions helps you run small pieces of code in the cloud without worrying about a whole application or the infrastructure to run it. With Azure Functions, you can use triggers to execute your code and bindings to simplify the input and output of your code.

Pros:
- Supports unlimited concurrent executions
- Supports C#, JavaScript, F#, Python, Batch, PHP, and PowerShell
- Supports an unlimited number of functions
- First 1 million requests free, then $0.20 per 1 million invocations, plus $0.000016/GB per second

Cons:
- Manual scaling (App Service Plan)

Conclusion

Compared with the traditional client-server approach, serverless architecture saves a lot of effort and proves cost effective for organizations of any size. The most important aspect of choosing the right platform is understanding which one benefits your organization the most. AWS Lambda has been around for a while with strong support for Linux-based platforms, but Azure Functions is not far behind in supporting the Windows-based suite, even having entered the serverless market more recently. If you adopt AWS, you will be able to make the most of its openness to open source integration, its pay-as-you-go model, and its high-performance computing environment. Azure, on the other hand, is easier to use if you are on a Windows platform; it also supports a precise pricing model, charging by the minute, and has extended support for macOS and Linux. If you are looking for a clear winner here, you shouldn't be surprised that AWS and Azure are similar in many ways, and it would be a tie if one had to declare either better or worse than the other. This battle will always be heated, and experts will keep placing bets on who wins the race. In the end, the discussion drills down to what your business needs; after all, the mission is always to grow your business at marginal cost.

Read next:
- The Lambda programming model
- How to Run Code in the Cloud with AWS Lambda
- Download Microsoft Azure serverless computing e-book for free

Are containers the end of virtual machines?

Vijin Boricha
13 Jun 2018
5 min read
For quite some time now, virtual machines (VMs) have enjoyed a lot of traction. The major reason for this trend was that IT organizations became convinced that instead of filling a huge room with servers, it is better to deploy all workloads on a few pieces of hardware. There is no doubt that virtual machines have succeeded: they save a lot of cost, work well, and make failovers easier. When containers were introduced, they received similar attention and have recently become even more popular amongst IT organizations, for a set of considerable reasons: they are highly scalable, easy to use, portable, fast to start, and cost effective. Containers also reduce management headaches because they share a common operating system, which makes it easier to fix bugs, apply update patches, and make other alterations. All in all, containers are lightweight and more portable than virtual machines. If all of this is true, are virtual machines going extinct? For that answer, you will have to dive into the complexities of both worlds.

How do virtual machines work?

A virtual machine is an individual operating system installed on top of your usual operating system, implemented through software emulation and hardware virtualization. Usually, multiple virtual machines run on servers, where the physical machine remains the same but each virtual environment runs a completely separate service. Consider an Ubuntu server as a VM and use it to install any service you need. Now suppose your deployment needs an additional service for handling web applications, but all your physical resources are preoccupied: all you need to do is install the new service on a guest virtual machine, and you are all set to relax.

Advantages of virtual machines:
- Multiple OS environments can run simultaneously on the same physical machine
- Easy to maintain and highly available, with convenient recovery and application provisioning
- Tend to be more secure than containers
- Better operating system flexibility than containers

Disadvantages of virtual machines:
- Simultaneously running VMs may perform unstably, depending on the workload the other running VMs place on the system
- Hardware accessibility is quite difficult from inside a virtual machine
- Virtual machines are heavy, taking up several gigabytes each

How do containers work?

You can think of containers as lightweight, executable packages that provide everything an application needs to run and function as desired. A container usually sits on top of a physical server and its host OS, allowing applications to run reliably in different environments by abstracting away the operating system and physical infrastructure. So where VMs depend heavily on hardware, this popular newcomer requires significantly less of it and does the task with ease and efficiency. Suppose you want to deploy multiple web servers quickly: containers make it easier, because deploying single services in containers requires less hardware than deploying virtual machines. The benefit of using containers does not end here.
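As a sketch of that speed, here is how spinning up several identical web servers can look with the Docker SDK for Python; the image choice and port numbers are arbitrary examples, not a recommendation.

```python
# Start three identical web servers in seconds from one small image.
# Requires a running Docker daemon and the 'docker' package (docker-py).
import docker

client = docker.from_env()
for host_port in (8081, 8082, 8083):
    client.containers.run(
        "nginx:alpine",              # one image, reused by every container
        detach=True,                 # run in the background
        ports={"80/tcp": host_port}, # map container port 80 to a host port
    )
```

Doing the same with three virtual machines would mean three full guest operating systems to boot and maintain.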
Docker, a popular container solution, can also create a cluster of Docker engines managed as a single virtual system. So if you are looking at deploying apps at scale with fewer failovers, your first preference should be containers.

Advantages of containers:
- You can add more computing workload to the same server, as containers consume fewer resources
- Servers can load more containers than virtual machines, as containers are usually measured in megabytes
- Containers make it easier to allocate resources to processes, which helps run your applications in different environments
- Containers are cost-effective solutions that decrease both operating and development costs
- Bug tracking and testing are easier, as there is no difference between running your application locally, on test servers, or in production
- Development, testing, and deployment times decrease with containers

Disadvantages of containers:
- Since containers share the kernel and other components of the host operating system, they are more vulnerable, and a compromise can impact the security of other containers as well
- Lack of operating system flexibility: every time you want to run a container on a different operating system, you need to start a new server

Now, coming to the original question: are containers worth it, and will they eliminate virtualization entirely? After reading this article, you can probably guess the winner by weighing the advantages against the disadvantages of each platform. With virtual machines, the hardware is virtualized to run multiple operating system instances; if you need a complete platform that can provide multiple services, virtual machines are your answer, as they are a mature and secure technology. If you are looking for high scalability, agility, speed, light weight, and portability, all of that comes under one hood: containers. With this standardized unit of software, you can stay ahead of the competition. And if you still have concerns over security and how a vulnerable kernel could jeopardize the cluster, DevSecOps is your knight in shining armor. The whole idea of DevSecOps is to bring operations and development together with security functions; in a nutshell, everyone involved in the software development life cycle is responsible for security.

Read next:
- Kubernetes Containerd 1.1 Integration is now generally available
- Top 7 DevOps tools in 2018
- What's new in Docker Enterprise Edition 2.0?

Why Alibaba cloud could be the dark horse in the public cloud race

Gebin George
07 Jun 2018
3 min read
The public cloud market seems to be dominated by industry giants from the west, like Amazon Web Services (AWS) and Microsoft Azure. One of China's tech giants, Alibaba Cloud, entered the public cloud market relatively recently and seems to be catching up quickly with its Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), and Platform-as-a-Service (PaaS) offerings.

According to reports from December 2017, Alibaba Cloud saw 56% year-on-year growth, with revenue of around 12.8 billion USD, and the Q2 2018 report is expected to be even better as the market grows to a sizeable amount. Alibaba Cloud already leads China's cloud market share. It provides around 100 core services, with data centers spread across 17 regions in total. Some of its stand-out features include:

Elastic computing. Alibaba Cloud's ECS services are highly scalable, quick, and powerful, with high-range Intel CPUs that bring down latency to deliver impressive results. They come with an extra security layer protecting applications from DDoS and Trojan attacks. The services involved here include ECS, container services, autoscaling, and so on.

Networking. Alibaba Cloud offers hybrid and distributed networking, ideal for enterprises that demand high network coverage. This involves communication between two VPCs and between VPCs and IDCs.

Security. Built-in anti-DDoS management and security assessment services reduce the cost of hiring and training quality security engineers to analyze and manage security services and data breaches.

Storage and CDN. Alibaba Cloud's OSS (Object Storage Service) helps you store, back up, and archive huge amounts of data in the cloud. The service is flexible: you pay only as per your usage, with no additional costs involved.

Analytics. Alibaba Cloud comprises a wide range of analytics services, such as business analytics, data processing, and stream analytics. Services like Elastic MapReduce, Apache Hadoop, and Apache Spark can be run easily on Alibaba Cloud for efficient cloud analytics.

For detailed products and services from Alibaba Cloud, refer to their official site. AWS and Azure have dominated the public cloud market with an array of services that change with market requirements. Considering its recent advancements and its affordable, highly competitive price range, Alibaba has joined the others in the race to dominate the public cloud market.

Read next:
- Microsoft Build 2018 Day 1: Azure meets Artificial Intelligence
- How to create your own AWS CloudTrail
- Google announce the largest overhaul of their Cloud Speech-to-Text

What is distributed computing and what's driving its adoption?

Melisha Dsouza
07 Nov 2018
8 min read
Distributed computing is having a real impact on the way companies look at the cloud. The "Most Promising Jobs 2018" report published by LinkedIn pointed out that distributed and cloud computing rank amongst the top 10 most in-demand skills.

What are the problems with centralized computing systems?

Distributed computing solves many of the challenges that centralized computing systems pose today. These centralized systems, like IBM mainframes, have been around for decades, but they are beginning to lose favor. This is because centralized computing is ineffective and expensive in the context of increasing data and workloads. When a single central computer controls a massive amount of computation at the same time, it puts a massive strain on the system, even a particularly powerful one. Centralized systems simply aren't capable of processing huge volumes of transactional data and supporting tons of concurrent online users. There is also a big issue with reliability: if your centralized server fails, all data could be permanently lost if you have no disaster recovery strategy. Fortunately, distributed computing offers solutions to many of these issues.

How does distributed computing work?

Distributed computing comprises a group of systems located at different places, all connected over a network, working on a single problem or a common goal. Each of these systems is autonomous, programmable, asynchronous, and failure-prone. Together they provide a better price/performance ratio than a centralized system, because it is more economical to add microprocessors than mainframes to your network, and they have more computational power than centralized (mainframe) computing systems.

Distributed computing and agility

Another major plus of distributed computing systems is that they provide much greater agility than centralized systems. Without centralization, organizations can add and change software and computational power according to the demands and needs of the business. With the reduction in the price of computing power and storage brought by public cloud services like AWS, organizations all over the world have begun using distributed systems and service-oriented architectures, like microservices.

Distributed computing in action: Google search

A perfect example of distributed computing in action is Google search. When a user submits a query, Google uses data from a number of different servers to deliver results, based on things like location, past searches, semantic keywords, and much, much more. These servers are located all around the world yet deliver the search result in seconds, or at times milliseconds.

How cloud is driving the adoption of distributed computing

Central to this adoption is the cloud. Today, cloud is mainstream and opens up the possibility of distributed systems to organizations in a number of different ways; arguably, you're not really seeing the full potential of cloud until you've moved to a distributed system. Let's look at the different ways cloud services are helping companies feel confident enough to successfully leverage distributed computing.

Infrastructure as a Service (IaaS)

IaaS makes distributed systems accessible for many organizations by allowing them to host their infrastructure on either a private or a public cloud. Essentially, it gives an organization control over the operating system and platform that form the foundation of its software infrastructure, while giving an external cloud provider control over the servers and virtualization technologies that make it possible to deploy that infrastructure. In the context of a distributed system, this means organizations have less to worry about: without IaaS, the process of developing and deploying a distributed system becomes much more complex and costly.

Platform as a Service: custom software on another platform

If IaaS effectively splits responsibilities between the organization and the cloud provider (the "service"), Platform as a Service (PaaS) outsources even more to the cloud provider. Essentially, an organization simply has to handle its applications and data, leaving every other aspect of its infrastructure to the platform. This brings many benefits and, in theory, should allow even relatively small engineering teams to take advantage of a distributed system. The underlying complexity and heavy lifting that a distributed system brings rest with the cloud provider, allowing an organization's engineers to focus on what matters most: shipping code. If you're thinking about speed and innovation, a PaaS opens that right up, provided you're happy to let your cloud provider manage the bulk of your infrastructure.

Software as a Service

SaaS solutions are perhaps the clearest example of a distributed system, although, given the way we use SaaS today, it's easy to forget it can be part of one. The concept is simple: a complete software solution delivered to the end user. If you're trying to accomplish something particularly complex, something you simply do not have the resources to do yourself, a SaaS solution can be effective. Users don't need to worry about installing and maintaining software; they can simply access it via the internet.

The biggest advantages of adopting a distributed computing system

#1 Complete control of the system architecture

Distributed computing opens up your options when it comes to system architecture. Although you might rely on an external cloud service for some resources (like compute or storage), the architectural decisions are ultimately yours. This means you can make decisions based on exactly what your organization needs and how it works. In a sense, this is why distributed computing can bring agility in a broad sense of the word: it allows you to prioritize according to your own needs and demands.

#2 Improved "absolute performance" of the computing system

Tasks can be partitioned into sub-computations that run concurrently, which in turn speeds up total task completion. What's more, if a particular site is currently overloaded with jobs, some of them can be moved to lightly loaded sites. This technique of load sharing can boost the performance of your system. Essentially, distributed systems minimize latency and response time while increasing throughput.
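The partitioning idea is easy to demonstrate even on a single machine. Here is a minimal Python sketch; the worker function and slice size are invented for illustration, and a real distributed system would ship each slice to a different node rather than to a local process pool.

```python
# Split one large job into independent slices and run them concurrently.
from concurrent.futures import ProcessPoolExecutor

def analyze_slice(data_slice):
    # Stand-in for real work, e.g. scanning a chunk of telemetry.
    return sum(x * x for x in data_slice)

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Partition the job into independent sub-computations.
    slices = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze_slice, slices))
    print(sum(results))
```

Because each slice is independent, adding more workers (or more machines) shortens total completion time, which is exactly the "absolute performance" gain described above.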
#3 A better price-to-performance ratio for the system

Distributed networks offer a better price/performance ratio than centralized mainframe computers. This is because decentralized and modular applications can share expensive peripherals, such as high-capacity file servers and high-resolution printers. Similarly, multiple components can be run on nodes with specialized processing, which further reduces the cost of multiple specialized processing systems.

#4 Disaster recovery

Distributed systems involve services communicating across different machines, which is where message integrity, confidentiality, and authentication come into play. Here, distributed computing gives organizations the flexibility to deploy a four-way mechanism to keep operations secure:

- Encryption
- Authentication
- Authorization
- Auditing

Another aspect of disaster recovery is reliability. If computation and the associated data are effectively built into a single machine, and that machine goes down, the entire service goes with it. With a distributed system, specific services might go down, but the whole thing should, in theory at least, stay standing.

#5 Resilience through replication

So, if specific services can go down within a distributed system, you still need to do something to increase resilience. You do this by replicating services across multiple nodes, minimizing potential points of failure. This is what's known as fault tolerance, and it improves system reliability without affecting the system as a whole. It's also worth pointing out that the hardware on which a distributed system is built is replaceable, which is better than depending on centralized hardware that, if it fails, takes everything with it.

Another distributed computing example: SETI

A good example of a distributed system is SETI. SETI collects massive amounts of data from observatories around the world on activity in the sky, in a bid to identify possible signs of extraterrestrial life. This information is sliced into smaller pieces of data for easy analysis by distributed computing applications running as screensavers on individual user PCs around the world. A PC running the SETI screensaver downloads a data slice from SETI, runs the analytics application while the PC is idle, and uploads the analyzed slice back to SETI when the analysis is complete. This massive data analysis is possible entirely because of distributed computing.

So, although distributed computing has become a bit of a buzzword, the technology is gaining traction in the minds of customers and service providers. Beyond the hype and debate, these services will ultimately help companies be more responsive to market conditions while restraining IT costs.

Read next:
- Cloudflare's decentralized vision of the web: InterPlanetary File System (IPFS) Gateway to create distributed websites
- Oath's distributed network telemetry collector, 'Panoptes', is now open source!
- Intelligent Edge Analytics: 7 ways machine learning is driving edge computing adoption in 2018

Is middleware dead? Cloud is the prime suspect!

Prasad Ramesh
17 Nov 2018
4 min read
The cloud is now a ubiquitous term, covering everything from storing photos to remotely using machines for complex AI tasks. But has it killed on-premises middleware setups and changed the way businesses manage the services their employees use?

Is middleware dead?

Middleware is the bridge that connects an operating system to different applications in a distributed system. Essentially, it is a transition layer of software that enables communication between the OS and applications; middleware acts as a pipe for data to flow from one application to another. If the communication between applications in a network is taken care of by this software, developers can focus on the applications themselves - hence middleware came into the picture. Middleware is typically used in enterprise networks.
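As a rough illustration of that 'pipe' role, here is a minimal in-process sketch in Python: two hypothetical applications exchange a message through a shared queue instead of calling each other directly. Real middleware (message brokers, RPC layers) plays the same role across machines on an enterprise network; the service names here are made up for the example.

```python
# An in-process sketch of the middleware "pipe": applications exchange
# messages through a shared bus instead of talking to each other directly.
import queue
import threading

message_bus = queue.Queue()  # the "pipe" between applications

def order_service():
    # Producer application: publishes a message onto the bus.
    message_bus.put({"event": "order_created", "order_id": 123})

def billing_service():
    # Consumer application: receives the message without knowing
    # anything about the producer.
    msg = message_bus.get()
    print("billing received:", msg)

consumer = threading.Thread(target=billing_service)
consumer.start()
order_service()
consumer.join()
```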
Is middleware still relevant?

Middleware was a necessity for IT businesses before the cloud was a thing. But as cloud adoption has become mainstream, offering scalability and elasticity, middleware has become less important in modern software infrastructures. In on-premises setups, middleware handled tasks such as remote calls, communication with other devices in the network, transaction management, and database interactions. In the cloud, all of this is taken care of by the service provider behind the scenes.

Middleware is largely in decline - with cloud being a key reason. Specifically, some of the reasons middleware has lost favor include:

Middleware maintenance can be expensive and can quickly deplete resources, especially if you're using middleware on a large scale.
Middleware can't scale as fast as the cloud. If you need to scale, you'll need new hardware - this makes elasticity difficult, with sunk costs in your hardware resources.
Sustaining large applications on middleware can become challenging over time.

How cloud solves middleware challenges

The reason cloud is killing off middleware is that it can simply do things better than traditional middleware. In just about every regard, from availability to flexibility to monitoring, using a cloud service makes life much easier - for developers and engineers, while potentially saving organizations time in resource management. If you're making decisions about software infrastructure, it probably doesn't feel like a tough decision. Even institutions like banks, which have traditionally resisted software innovation, are embracing cloud: more than 80% of the world's largest banks and more than 85% of global banks are opting for the cloud, according to an Information Age article.

When is middleware the right option?

There might still be some life left in middleware yet. For smaller organizations, where an on-premises server setup will be used for a significant period of time - with cloud merely a possibility on the horizon - middleware still makes sense. Of course, no organization wants to think of itself as 'small' - even if you're just starting out, you probably have plans to scale, and in that case cloud will give you the flexibility that middleware inhibits. While you shouldn't invest in cloud solutions if you don't need them, it's hard to think of a scenario where cloud wouldn't provide an advantage over middleware. From tiny startups that need accessible and efficient hosting services to huge organizations whose scale is too big to handle alone, cloud is the best option in a massive range of use cases.

Is middleware really dead?

So yes, middleware is dead for most practical use cases. Most companies go with the cloud, given its advantages and flexibility. And with newer options like multi-cloud, which lets you use different cloud services for different areas, the cloud offers even more flexibility.

Read next:
Think Silicon open sources GLOVE: An OpenGL ES over Vulkan middleware
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
MongoDB switches to Server Side Public License (SSPL) to prevent cloud providers from exploiting its open source code

Keep your serverless AWS applications secure [Tutorial]

Savia Lobo
18 Jun 2018
11 min read
Handling security is an extensive and complex topic. If it's not done right, you open up your app to dangerous hacks and breaches - and even if everything is done right, it may still be hacked. So it's important to understand common security mechanisms, to avoid exposing websites to vulnerabilities, and to follow recommended practices and methodologies that have been extensively tested and proven to be robust. In this tutorial, we will learn how to secure serverless applications using AWS. We will cover the security basics and then move on to handling authorization and authentication on AWS. This article is an excerpt taken from the book 'Building Serverless Web Applications' written by Diego Zanon.

Security basics in AWS

One of the mantras of security experts is this: don't roll your own. It means you should never use, in a production system, any kind of crypto algorithm or security model that you developed by yourself. Always use solutions that have been widely used, tested, and recommended by trusted sources. Even experienced people may commit errors and expose a solution to attacks, especially in the cryptography field, which requires advanced math. However, when a proposed solution is analyzed and tested by a great number of specialists, errors are much less frequent.

In the security world, there is a term called security through obscurity. It describes a security model where the implementation mechanism is not publicly known, so there is a belief that it is secure because no one has prior information about its flaws. It can indeed be secure, but used as the only form of protection it is considered poor security practice: if a hacker is persistent enough, he or she can discover flaws even without knowing the internal code. In this case, again, it's better to use a highly tested algorithm than your own. Security through obscurity can be compared to someone trying to protect their own money by burying it in the backyard, when the common security mechanism would be to put the money in a bank. The money may be safe while buried, but only until someone finds out about its existence and starts looking for it. For this reason, when dealing with security, we usually prefer to use open source algorithms and tools: everyone can access and discover flaws in them, but there is also a great number of specialists involved in finding the vulnerabilities and fixing them.

In this section, we will discuss other security concepts that everyone must know when building a system.

Information security

When dealing with security, there are some attributes that need to be considered. The most important ones are the following:

Authentication: confirm the user's identity by validating that the user is who they claim to be
Authorization: decide whether the user is allowed to execute the requested action
Confidentiality: ensure that data can't be understood by third parties
Integrity: protect the message against undetectable modifications
Non-repudiation: ensure that someone can't deny the authenticity of their own message
Availability: keep the system available when needed

These terms are explained in more detail in the following sections.

Authentication

Authentication is the ability to confirm the user's identity. It can be implemented by a login form where you request the user to type their username and password. If the hashed password matches what was previously saved in the database, you have enough proof that the user is who they claim to be.
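As a minimal sketch of both ideas at once - verifying a hashed password without rolling your own crypto - the following Python example uses the standard library's vetted PBKDF2 implementation. The salt handling and the absence of a real user store are simplifying assumptions for illustration.

```python
# A minimal sketch: hash a password with a vetted KDF and verify a login
# attempt against the stored hash. User lookup and salt storage are
# simplified assumptions here.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    salt = salt or os.urandom(16)  # a fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    # compare_digest is a constant-time comparison, avoiding timing leaks.
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```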
This model is good enough, at least for typical applications: you confirm the identity by requesting the user to provide what they know. Another kind of authentication is to request the user to provide what they have - a physical device (like a dongle) or access to an e-mail account or phone number.

However, you can't ask the user to type their credentials on every request. As long as you authenticate the user on the first request, you must create a security token that will be used in subsequent requests. This token will be saved on the client side as a cookie and will be automatically sent to the server in all requests. On AWS, this token can be created using the Cognito service. How this is done will be described later in this chapter.

Authorization

When a request is received in the backend, we need to check whether the user is allowed to execute the requested action. For example, if the user wants to check out the order with ID 123, we need to query the database to identify the owner of the order and compare it with the requesting user.

Another scenario is when we have multiple roles in an application and we need to restrict data access. For example, a system developed to manage school grades may be implemented with two roles, student and teacher. Teachers access the system to insert or update grades, while students access it to read those grades. In this case, the authorization system must restrict the insert and update actions to users in the teachers group, while users in the students group must be restricted to reading their own grades.

Most of the time, we handle authorization in our own backend, but some serverless services don't require a backend and are responsible for properly checking authorization themselves. For example, in the next chapter we are going to see how serverless notifications are implemented on AWS. When we use AWS IoT, if we want a private channel of communication between two users, we must give them access to one specific resource known by both and restrict access for other users to avoid the disclosure of private messages.
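The two checks described above - ownership and role membership - boil down to a few lines. This is only a hedged sketch with a hypothetical in-memory data store; in the book's scenario, the lookup would hit a real database.

```python
# A sketch of the ownership and role checks described above.
# The data store and request shape are hypothetical stand-ins.
ORDERS = {123: {"owner": "alice"}}

def authorize_checkout(user, order_id):
    # Ownership check: only the owner may check out their order.
    order = ORDERS.get(order_id)
    return order is not None and order["owner"] == user

def authorize_grade_update(user_roles):
    # Role check: only teachers may insert or update grades.
    return "teacher" in user_roles

print(authorize_checkout("alice", 123))     # True
print(authorize_checkout("mallory", 123))   # False
print(authorize_grade_update({"student"}))  # False
```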
Confidentiality

Developing a website that uses HTTPS for all requests is the main driver for achieving confidentiality in the communication between users and your site. As the data is encrypted, it's very hard for malicious users to decrypt and understand its contents. Although there are attacks that can intercept the communication and forge certificates (man-in-the-middle), these require the malicious user to have access to the victim's machine or network. From our side, adding HTTPS support is the best thing we can do to minimize the chance of attacks.

Integrity

Integrity is related to confidentiality. While confidentiality relies on encrypting a message to prevent other users from accessing its contents, integrity deals with protecting messages against modification, by signing them with digital signatures (TLS certificates). Integrity is an important concept when designing low-level network systems, but for us, again, all that matters is adding HTTPS support.

Non-repudiation

Non-repudiation is a term that is often confused with authentication, since both have the objective of proving who sent a message. However, the main difference is that authentication takes a technical view, while the non-repudiation concept is concerned with legal terms, liability, and auditing. When you have a login form with username and password inputs, you can authenticate the user who correctly knows the combination, but you can't be 100% certain, since the credentials can be correctly guessed or stolen by a third party. On the other hand, if you have a stricter access mechanism, such as a biometric entry, you have more credibility. However, this is not perfect either - it's just a better non-repudiation mechanism.

Availability

Availability is also a concept of interest in the information security field, because availability is not just about provisioning hardware to meet your users' needs: availability can suffer interruptions caused by malicious users. There are attacks, such as Distributed Denial of Service (DDoS), that aim to create bottlenecks that disrupt a site's availability. In a DDoS attack, the targeted website is flooded with superfluous requests with the objective of overloading the system. This is usually accomplished by a controlled network of infected machines called a botnet.

On AWS, all services run under the AWS Shield service, which was designed to protect against DDoS attacks at no additional charge. However, if you run a very large and important service, you may be a direct target of advanced and large DDoS attacks. In this case, there is a premium tier offered by AWS Shield to ensure your website's availability even in worst-case scenarios. This requires an investment of US$ 3,000 per month, and with it you get 24x7 support from a dedicated team and access to other tools for mitigating and analyzing DDoS attacks.

Security on AWS

We use AWS credentials, roles, and policies, but security on AWS is much more than handling the authentication and authorization of users. This is what we will discuss in this section.

Shared responsibility model

Security on AWS is based on a shared responsibility model. While Amazon is responsible for keeping the infrastructure safe, customers are responsible for patching security updates to software and protecting their own user accounts.

AWS's responsibilities include the following:

Physical security of the hardware and facilities
Infrastructure of networks, virtualization, and storage
Availability of services respecting Service Level Agreements (SLAs)
Security of managed services such as Lambda, RDS, DynamoDB, and others

A customer's responsibilities are as follows:

Applying security patches to the operating system on EC2 machines
Security of installed applications
Avoiding disclosure of user credentials
Correct configuration of access policies and roles
Firewall configurations
Network traffic protection (encrypting data to avoid disclosure of sensitive information)
Encryption of server-side data and databases

In the serverless model, we rely only on managed services. In this case, we don't need to worry about applying security patches to the operating system or runtime, but we do need to worry about the third-party libraries our application depends on to execute. And, of course, we need to worry about everything we configure ourselves (firewalls, user policies, and so on), network traffic (supporting HTTPS), and how data is manipulated by the application.

The Trusted Advisor tool

AWS offers a tool named Trusted Advisor, which can be accessed through https://console.aws.amazon.com/trustedadvisor. It was created to offer help on how you can optimize costs or improve performance, but it also helps identify security breaches and common misconfigurations.
It searches for unrestricted access to specific ports on your EC2 machines, and checks whether Multi-Factor Authentication is enabled on the root account and whether IAM users were created in your account. You need to pay for AWS premium support to unlock other features, such as cost optimization advice; the security checks, however, are free.

Pen testing

A penetration test (or pen test) is a good practice that all big websites should perform periodically. Even if you have a good team of security experts, the usual recommendation is to hire a specialized third-party company to perform pen tests and find vulnerabilities, because they will most likely have tools and procedures that your team has not tried yet. The caveat here is that you can't execute these tests without contacting AWS first. To respect their user terms, you can only try to find breaches on your own account and assets, in scheduled time frames (so they can disable their intrusion detection systems for your assets), and only on restricted services, such as EC2 instances and RDS.

AWS CloudTrail

AWS CloudTrail is a service designed to record all AWS API calls executed on your account. The output of this service is a set of log files that register the API caller, the date/time, the source IP address of the caller, the request parameters, and the response elements that were returned. This kind of service is very important for security analysis in case of data breaches, and for systems that need an auditing mechanism for compliance standards.
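If you want to inspect that trail programmatically, the AWS SDK exposes it through the CloudTrail client. Here is a small sketch using boto3, assuming the library is installed and credentials are configured; the event name filter is just an example.

```python
# A small sketch that reads recent API calls recorded by CloudTrail.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent console sign-in events; the attribute value is an example.
response = cloudtrail.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}
    ],
    MaxResults=10,
)

for event in response["Events"]:
    # Who called what, and when -- the audit trail described above.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```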
MFA

Multi-Factor Authentication (MFA) is an extra security layer that everyone should add to their AWS root account to protect against unauthorized access. Besides knowing the username and password, a malicious user would also need physical access to your smartphone or security token, which greatly reduces the risk. On AWS, you can use MFA through the following means:

Virtual devices: an application installed on Android, iPhone, or Windows phones
Physical devices: six-digit tokens or OTP cards
SMS: messages received on your phone

We have discussed the basic security concepts and how to apply them to a serverless project. If you've enjoyed reading this article, do check out 'Building Serverless Web Applications' to implement signup, sign-in, and logout features using Amazon Cognito.

Read next:
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Analyzing CloudTrail Logs using Amazon Elasticsearch
How to create your own AWS CloudTrail

Why microservices and DevOps are a match made in heaven

Erik Kappelman
12 Oct 2017
4 min read
What are microservices?

In terms of software, 'services' can be thought of as little chunks of functionality. Services are a part of service-oriented architecture (SOA). Services are stateless, adhere to a contract (shared standards), are autonomous, relatively granular, and should be a 'black box' to the user. Microservices are a logical extension of services: microservices are services that perform only one function. This matches the Unix philosophy, "Do one thing, and do it well." But who cares? And what about DevOps? Well, although the title is a cliche, DevOps and microservice-based architecture are absolutely a match made in heaven. Following the philosophy of fully explaining terms, let's talk a bit about DevOps.

What is DevOps?

DevOps, which comes from the words "development operations," is a process that is used to create software. DevOps is not one specific philosophy or process - there are many variants - but there are some shared features across most of them. DevOps advocates a continuous development process in which as many elements of the process as possible are automated. Each iteration of a product is coded, built, tested, packaged, and released, and then monitored; this is referred to as the DevOps toolchain. When there is a need or desire to upgrade or change functionality or the way a product is designed, the process begins again. The idea is that DevOps should run in a circular fashion, always upgrading and always getting better. There are myriad tools in use right now within various flavors of DevOps. These tools are designed to meld with the DevOps toolchain and have revolutionized the development process for many developers and companies.

Why microservices and DevOps go together

You should already see why microservices and DevOps go together so well. DevOps calls for continuous monitoring, testing, and deployment of software. Microservices are inherently modular because they are intended to perform a single function, and modular software fits easily into the DevOps structure. Incremental changes can be made to parts of a project - perhaps a single microservice. If the service contracts and control mechanisms are properly created, a single microservice should be easy to upgrade, build, test, deploy, and monitor without sending a cascading wave of bugs through adjacent services.

DevOps really doesn't make much sense outside of a structure like this. If your software is designed as a behemoth of an interconnected, interdependent ball of wax, changing part of the functionality will 'break' everything. As changes or upgrades are made, almost every change, no matter how big or small, triggers what amounts to an almost full rewrite or upgrade of the software in question. Applied to this kind of project, most DevOps processes would actually hinder development instead of helping. When projects are modularized at a relatively granular level, such as when a project employs a microservice-based structure, DevOps expedites delivery time and quality simultaneously. It should be noted that neither a microservice architecture nor DevOps processes are tied to any specific tools or languages; these are development philosophies that can be applied in many different ways. That being said, there are many continuous integration and deployment tools, as well as many automation tools, designed for use within a DevOps framework.
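To ground the "do one thing, and do it well" idea, here is a toy single-purpose service in Python, exposing exactly one function over HTTP using only the standard library. The endpoint, port, and payload are arbitrary choices for the sketch - a production microservice would add packaging, health checks, and deployment automation around something equally small.

```python
# A toy single-purpose microservice: one function, exposed over HTTP,
# built with only the standard library. Port and payload are examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingService(BaseHTTPRequestHandler):
    def do_GET(self):
        # The service's one function: return a greeting.
        body = json.dumps({"message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), GreetingService).serve_forever()
```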
Criticisms of microservices

There are some criticisms of the microservice structure. One is that using microservices does not get rid of the complexities of a traditional program: those complexities are just moved onto the network the services use to communicate. Stress on the network is another criticism, because the service architecture distributes the elements of a program or process around a network in various places. For these services to perform cohesive functions, they must use the network to communicate, and depending on how many services make up a program or process, this can translate into significant network activity, which then creates problems of its own. Another criticism is that microservices can sometimes become so-called 'nanoservices': services that perform a function so small that the cost of the service outweighs its utility. These criticisms should be kept in mind, but, in my opinion, they don't amount to enough to undermine the usual functionality of microservices in a DevOps environment.

Using microservices in the DevOps process helps fully realize the potential of the continuous integration, testing, and deployment promised by DevOps. These tools combined can optimize computing in a way that should be utilized during development whenever possible.

Is serverless architecture a good choice for app development?

Mehul Rajput
11 Oct 2019
6 min read
App development has evolved rapidly in recent years. With new demands and expectations from businesses and users, trends like cloud have helped developers to be more productive and to build faster, more reliable, and more secure applications. But there's no end to evolution - and serverless is arguably the next step for application development. But is a serverless architecture the right choice?

What is a serverless architecture?

When you hear the word serverless, you might assume that it means no servers. In actual fact, it refers to eliminating the need to manage the servers; that responsibility shifts to your cloud provider. Simply put, the constituent parts of an application are divided between multiple servers, with no need for the application owner/manager to create or manage the infrastructure that supports them. Instead of running off a server, a serverless application runs off functions: actions that are fired to make things happen within the application. This is where the phrase 'function-as-a-service', or FaaS (another way of describing serverless), comes from. A recent report claims that the FaaS market is projected to grow to 7.72 billion US dollars by 2021, at a growth rate of 32.7%.

Is serverless architecture a good choice for app development?

Now that we've established what serverless actually means, let's get down to business. Is serverless architecture the right choice for app development? Well, it can work either way; it can be positive as well as negative. Here are some reasons.

Using serverless for app development: the positives

There are many reasons why a serverless architecture can be good for app development. Some of them are discussed below:

Decreasing costs
Easier for service
Scalability
Third-party services

Decreasing costs

The most effective benefit of a serverless architecture in an app development process is that it reduces costs; it's typically less expensive than a 'traditional' server architecture. The reason is that with hardware servers, you have to pay for many different things that might not be required: regular maintenance, the premises, the electricity, and staff. With serverless, you can save a considerable amount of money and put it towards app quality instead.

Easier for service

When the owner or app manager doesn't have to manage the server themselves, and a machine can do the job, keeping the service accessible becomes far less challenging. First, the job gets easier because it doesn't require constant supervision. Second, you don't have to spend time on it, and can use that time for productive work such as product development. Third, the service this technology provides is reliable, so you can use it without much fear.

Scalability

Another usefully interesting advantage of serverless architecture in app development is scalability. So, what is scalability? It is the ability of a system to handle an extra amount of work by adding resources: the capability of an app or product to continue to work properly, without disturbance, as it is resized in volume to meet users' needs. Serverless architecture acts as the resource that is added to the system to handle any work that has piled up.
Third-party services

Another useful feature of serverless architecture is that you can use third-party services. Your app can consume any third-party service it requires beyond what you already have, which reduces the effort needed to build the app's backend architecture. Additionally, the third party might provide better services than you could build yourself, so serverless architecture also gives you the reach of a third-party ecosystem.

Serverless for app development: the negatives

Now that we know the advantages of a serverless architecture, it's important to note that it also brings some limitations and disadvantages. These are:

Time restrictions
Vendor lock-in
Multi-tenancy
Debugging is not possible

Time restrictions

As mentioned before, serverless architecture works on FaaS rules and has a time limit for running a function. This time limit is 300 seconds exactly; when the limit is reached, the function is stopped. Therefore, the FaaS approach may not be a good choice for more complex functions that need more time to execute. The problem can sometimes be tackled by splitting a task into several simpler functions, if the task allows it. Otherwise, time restrictions like these can cause great difficulty.

Vendor lock-in

We discussed that serverless architecture lets you use third-party services, but this can also work against you and cause vendor lock-in. If, for any reason, you decide to shift to a new service provider, services will in most cases be fulfilled in a different way. That means the productivity gains you expected from serverless will be lost, as you will have to adjust and reconfigure the infrastructure to accept the new service.

Multi-tenancy

Multi-tenancy is a growing problem in serverless architecture. The data of many tenants is kept quite close together, which can create confusion: some data might be exchanged, distributed, or even lost. In turn, this can cause security and reliability issues. A customer could, for example, suddenly produce an extraordinarily high load that affects other customers' applications.

Debugging is not possible

There is no conventional debugging with serverless. Since the platform where the uploaded code runs is not under your control, there is no facility to attach a debugger to it. If you want to understand a function, you run it and wait for the result; the function can crash, and you cannot step through what happened. There is a way to mitigate this, however: extensive logging. With every step being logged, the chances of untraceable errors decrease.
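As a hedged sketch of that logging workaround, here is what an AWS Lambda-style handler in Python might look like with a log line at every step; the business logic is a hypothetical placeholder. On Lambda, this output lands in CloudWatch Logs, which is usually the only window into a failed invocation.

```python
# A sketch of "extensive logging" in a FaaS handler: log every step so a
# failure can be traced from the logs after the fact.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def lambda_handler(event, context):
    logger.info("received event: %s", event)
    try:
        order_id = event["order_id"]  # step 1: parse input
        logger.info("processing order %s", order_id)
        result = {"order_id": order_id, "status": "processed"}  # step 2: work
        logger.info("finished order %s", order_id)
        return result
    except Exception:
        # Log the full traceback before re-raising; on Lambda this is
        # often the only evidence of what went wrong.
        logger.exception("handler failed")
        raise

print(lambda_handler({"order_id": 42}, None))
```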
Conclusion

Serverless architecture certainly seems impressive in spite of its limitations. There is no doubt that the viability and success of an architecture depend on the business requirements, and of course on the technology used; in the same way, serverless can shine when used in the appropriate case. I hope this blog has helped you understand serverless architecture for mobile apps, and shown both its bright and dark sides.

Author bio

Mehul Rajput is a CEO and co-founder of Mindinventory, which specializes in Android and iOS app development and provides web and mobile app solutions for startup to enterprise-level businesses. He is an avid blogger and writes on mobile technologies, mobile apps, app marketing, app development, startups, and business.

Read next:
What is serverless architecture and why should I be interested?
Introducing numpywren, a system for linear algebra built on a serverless architecture
Serverless Computing 101
Modern Cloud Native architectures: Microservices, Containers, and Serverless - Part 1
Modern Cloud Native architectures: Microservices, Containers, and Serverless - Part 2

A serverless online store on AWS could save you money. Build one.

Savia Lobo
14 Jun 2018
9 min read
In this article you will learn to build an entire serverless project of an AWS online store, beginning with a React SPA frontend hosted on AWS, followed by a serverless backend with API Gateway and Lambda functions. This article is an excerpt taken from the book 'Building Serverless Web Applications' written by Diego Zanon. In this book, you will be introduced to the AWS services, and you'll learn how to estimate costs and how to set up and use the Serverless Framework.

The serverless architecture of AWS' online store

We will build a real-world use case of a serverless solution. This sample application is an online store with the following requirements:

A list of available products
Product details with user ratings
Adding products to a shopping cart
Account creation and login pages

For a better understanding of the architecture, take a look at the following diagram, which gives a general view of how the different services are organized and how they interact.

Estimating costs

In this section, we will estimate the costs of our sample application demo based on some usage assumptions and Amazon's pricing model. All pricing values used here are from mid-2017 and consider the cheapest region, US East (Northern Virginia). This section covers an example to illustrate how costs are calculated. Since the billing model and prices can change over time, always refer to the official sources to get updated prices before making your own estimates. You can use Amazon's calculator, which is accessible at this link: http://calculator.s3.amazonaws.com/index.html. If you still have doubts after reading the instructions, you can always contact Amazon's support for free to get commercial guidance.

Assumptions

For our pricing example, we can assume that our online store will receive the following traffic per month:

100,000 page views
1,000 registered user accounts
200 GB of data transferred, considering an average page size of 2 MB
5,000,000 code executions (Lambda functions), with an average of 200 milliseconds per request

Route 53 pricing

We need a hosted zone for our domain name, which costs US$ 0.50 per month. We also need to pay US$ 0.40 per million DNS queries to our domain. As this is a prorated cost, 100,000 page views will cost only US$ 0.04.

Total: US$ 0.54

S3 pricing

Amazon S3 charges US$ 0.023 per GB/month stored, US$ 0.004 per 10,000 requests to your files, and US$ 0.09 per GB transferred. However, as we are using CloudFront, transfer costs will be charged at CloudFront prices and are not considered in the S3 bill. If our website occupies less than 1 GB of static files, with an average page of 2 MB and 20 files, we can serve 100,000 page views for less than US$ 20. With CloudFront in front, S3 costs go down to US$ 0.82, while CloudFront usage is paid for separately in its own section. Real costs would be even lower, because CloudFront caches files and would not need to make 2,000,000 file requests to S3, but let's skip this detail to reduce the complexity of the estimate. On a side note, the cost would be much higher if you had to provision machines to handle this number of page views for a static website with the same availability and scalability.

Total: US$ 0.82
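Those two subtotals are easy to re-derive. The following Python snippet reproduces the arithmetic using the mid-2017 prices quoted above; the file count per page is the article's own assumption.

```python
# Re-deriving the Route 53 and S3 figures with the mid-2017 prices above.
PAGE_VIEWS = 100_000
FILES_PER_PAGE = 20

# Route 53: hosted zone plus prorated DNS queries.
route53 = 0.50 + 0.40 * (PAGE_VIEWS / 1_000_000)
print(f"Route 53: US$ {route53:.2f}")   # US$ 0.54

# S3: 1 GB stored, plus 2,000,000 file requests (20 per page view);
# transfer is billed under CloudFront instead.
s3_requests = PAGE_VIEWS * FILES_PER_PAGE
s3 = 0.023 * 1 + 0.004 * (s3_requests / 10_000)
print(f"S3: US$ {s3:.2f}")              # US$ 0.82
```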
CloudFront pricing

CloudFront is slightly more complicated to price, since you need to guess how much traffic comes from each region, as regions are priced differently. The following table shows an example estimation:

| Region | Estimated traffic | Cost per GB transferred | Cost per 10,000 HTTPS requests |
| --- | --- | --- | --- |
| North America | 70% | US$ 0.085 | US$ 0.010 |
| Europe | 15% | US$ 0.085 | US$ 0.012 |
| Asia | 10% | US$ 0.140 | US$ 0.012 |
| South America | 5% | US$ 0.250 | US$ 0.022 |

As we have estimated 200 GB of files transferred with 2,000,000 requests, the total will be US$ 21.97.

Total: US$ 21.97

Certificate Manager pricing

Certificate Manager provides SSL/TLS certificates for free. You only need to pay for the AWS resources you create to run your application.

IAM pricing

There is no charge specifically for IAM usage. You are charged only for the AWS resources your users consume.

Cognito pricing

Each user has an associated profile that costs US$ 0.0055 per month. However, there is a permanent free tier that allows 50,000 monthly active users without charge, which is more than enough for our use case. Besides that, we are charged for Cognito syncs of our user profiles: US$ 0.15 for each 10,000 sync operations and US$ 0.15 per GB/month stored. If we estimate 1,000 active registered users with less than 1 MB per profile and fewer than 10 visits per month on average, we can estimate a charge of US$ 0.30.

Total: US$ 0.30

IoT pricing

IoT charges start at US$ 5 per million messages exchanged. As each page view makes at least two requests, one to connect and another to subscribe to a topic, we can estimate a minimum of 200,000 messages per month. We need to add 1,000 messages if we suppose that 1% of users will rate products, and we can ignore other requests like disconnect and unsubscribe because they are excluded from billing. In this setting, the total cost would be US$ 1.01.

Total: US$ 1.01

SNS pricing

We will use SNS only for internal notifications, when CloudWatch triggers a warning about issues in our infrastructure. SNS charges US$ 2.00 per 100,000 e-mail messages, but it offers a permanent free tier of 1,000 e-mails, so it will be free for us.

CloudWatch pricing

CloudWatch charges US$ 0.30 per metric/month and US$ 0.10 per alarm, and offers a permanent free tier of 50 metrics and 10 alarms per month. If we create 20 metrics and expect 20 alarms in a month, we can estimate a cost of US$ 1.00.

Total: US$ 1.00

API Gateway pricing

API Gateway starts charging US$ 3.50 per million API calls received and US$ 0.09 per GB transferred out to the internet. If we assume 5 million requests per month, each with a response averaging 1 KB, the total cost of this service will be US$ 17.93.

Total: US$ 17.93

Lambda pricing

When you create a Lambda function, you need to configure the amount of RAM that will be available for use, ranging from 128 MB to 1.5 GB. Allocating more memory means additional costs. This breaks the philosophy of avoiding provisioning, but at least it's the only thing you need to worry about. Good practice here is to estimate how much memory each function needs and run some tests before deploying to production; a bad provision may result in errors or higher costs. Lambda has the following billing model:

US$ 0.20 per 1 million requests
US$ 0.00001667 per GB-second

Running time is counted in fractions of seconds, rounding up to the nearest multiple of 100 milliseconds. Furthermore, there is a permanent free tier that gives you 1 million requests and 400,000 GB-seconds per month without charge. In our use case scenario, we have assumed 5 million requests per month with an average of 200 milliseconds per execution.
We can also assume that the allocated RAM memory is 512 MB per function:

Request charges: since 1 million requests are free, you pay for 4 million, which costs US$ 0.80.
Compute charges: 5 million executions of 200 milliseconds each gives us 1 million seconds. As we are running with a 512 MB capacity, this results in 500,000 GB-seconds, of which 400,000 GB-seconds are free, leaving a charge for 100,000 GB-seconds that costs US$ 1.67.

Total: US$ 2.47

SimpleDB pricing

Take a look at the following SimpleDB billing, where the free tier is valid for new and existing users:

US$ 0.14 per machine-hour (25 hours free)
US$ 0.09 per GB transferred out to the internet (1 GB free)
US$ 0.25 per GB stored (1 GB free)

Take a look at the following charges:

Compute charges: considering 5 million requests with an average of 200 milliseconds of execution time, where 50% of this time is spent waiting for the database engine, we estimate 139 machine-hours per month. Discounting the 25 free hours, we have an execution cost of US$ 15.96.
Transfer costs: since we'll transfer data between SimpleDB and AWS Lambda, there is no transfer cost.
Storage charges: if we assume a 5 GB database, the result is US$ 1.00, since 1 GB is free.

Total: US$ 16.96 - but this will not be added to the final estimate, since we will run our application using DynamoDB.

DynamoDB

DynamoDB requires you to provision the throughput capacity you expect your tables to offer. Instead of provisioning hardware, memory, CPU, and other factors, you say how many read and write operations you expect, and AWS handles the machine resources needed to meet your throughput needs with consistent, low-latency performance. One read capacity unit represents one strongly consistent read per second, or two eventually consistent reads per second, for objects up to 4 KB in size. Regarding write capacity, one unit means you can write one object of up to 1 KB per second. Given these definitions, AWS offers a permanent free tier of 25 read units and 25 write units of throughput capacity, in addition to 25 GB of free storage. It charges as follows:

US$ 0.47 per month for every Write Capacity Unit (WCU)
US$ 0.09 per month for every Read Capacity Unit (RCU)
US$ 0.25 per GB/month stored
US$ 0.09 per GB transferred out to the internet

Since our estimated database will have only 5 GB, we are inside the free tier, and we will not pay for transferred data because there is no transfer cost to AWS Lambda. Regarding read/write capacity, we have estimated 5 million requests per month. Distributed evenly, that is two requests per second; in this case, we will consider it one read and one write operation per second. We now need to estimate how many objects are affected by each operation. We can estimate that a write operation will manipulate 10 items on average and that a read operation will scan 100 objects. In this scenario, we would need to reserve 10 WCU and 100 RCU. As we have 25 WCU and 25 RCU for free, we only need to pay for 75 RCU per month, which costs US$ 6.75.

Total: US$ 6.75
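The Lambda and DynamoDB figures can be re-derived the same way. The snippet below reproduces the arithmetic with the free tiers subtracted, using the prices and usage assumptions stated above.

```python
# Re-deriving the Lambda and DynamoDB totals with the article's mid-2017
# prices and usage assumptions.
REQUESTS = 5_000_000
AVG_SECONDS = 0.2
MEMORY_GB = 0.5

# Lambda: the first 1M requests and 400,000 GB-seconds are free.
request_cost = 0.20 * (REQUESTS - 1_000_000) / 1_000_000
gb_seconds = REQUESTS * AVG_SECONDS * MEMORY_GB
compute_cost = 0.00001667 * (gb_seconds - 400_000)
print(f"Lambda: US$ {request_cost + compute_cost:.2f}")   # US$ 2.47

# DynamoDB: 10 WCU and 100 RCU needed; 25 of each are free.
wcu_cost = 0.47 * max(0, 10 - 25)   # fully covered by the free tier
rcu_cost = 0.09 * max(0, 100 - 25)  # 75 billable read capacity units
print(f"DynamoDB: US$ {wcu_cost + rcu_cost:.2f}")         # US$ 6.75
```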
Total pricing

Let's summarize the cost of each service in the following table:

| Service | Monthly cost |
| --- | --- |
| Route 53 | US$ 0.54 |
| S3 | US$ 0.82 |
| CloudFront | US$ 21.97 |
| Cognito | US$ 0.30 |
| IoT | US$ 1.01 |
| CloudWatch | US$ 1.00 |
| API Gateway | US$ 17.93 |
| Lambda | US$ 2.47 |
| DynamoDB | US$ 6.75 |
| Total | US$ 52.79 |

This results in a total cost of roughly US$ 50 per month in infrastructure to serve 100,000 page views. If you have a conversion rate of 1%, you get 1,000 sales per month, which means you pay about US$ 0.05 in infrastructure for each product that you sell.

Thus, in this article you learned about the serverless architecture of an AWS online store and how to estimate its costs. If you've enjoyed reading the excerpt, do check out Building Serverless Web Applications to monitor the performance, efficiency, and errors of your apps, and to learn how to test and deploy your applications.

Read next:
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Serverless computing wars: AWS Lambdas vs Azure Functions
Using Amazon Simple Notification Service (SNS) to create an SNS topic

Virtual machines vs Containers

Amit Kothari
17 Oct 2017
5 min read
Virtual machines and containers are pretty similar, but they do have some important differences, and those differences will dictate which one you decide to use. So when you ask a question like 'virtual machines vs containers', there isn't necessarily going to be an outright winner - but there might be a winner for you in a given scenario. Let's take a look at what exactly a virtual machine is, what a container is, and how they compare, as well as the key differences between the two.

What is a virtual machine?

Virtual machines are a product of hardware virtualization. They sit on top of physical machines, with the hypervisor, or virtual machine manager, in between, acting as a layer of abstraction between the virtual machine and the underlying hardware. A virtualized physical machine can host multiple virtual machines, enabling better hardware utilization. Since the hypervisor abstracts the physical machine's hardware, it allows virtual machines to use a different operating system on the same host machine. The host operating system and each virtual machine operating system run their own kernel. All communication between the virtual machines and the host machine occurs through the hypervisor, resulting in a high level of isolation: if one virtual machine crashes, it does not affect the other virtual machines running on the same physical machine. Although the hypervisor's abstraction layer offers a high level of isolation, it also affects performance. This problem can be solved by using a different virtualization technique.

What is a container?

Containers use lightweight, operating-system-level virtualization. Similar to virtual machines, multiple containers can run on the same host machine. However, containers do not have their own kernel; they share the host machine's kernel, making them much smaller in size than virtual machines. They use process-level isolation, allowing processes inside a container to be isolated from other containers.

The difference between virtual machines and containers

In his post Containers are not VMs, Mike Coleman uses the analogy of houses and apartment buildings to compare virtual machines and containers. Self-contained houses have their own infrastructure, while apartments are built around shared infrastructure. Similarly, virtual machines have their own operating system, with kernel, binaries, libraries, and so on, while containers share the host operating system kernel with other containers. Because of this, containers are much smaller, allowing a physical machine to host more containers than virtual machines. And since containers use lightweight operating-system-level virtualization instead of a hypervisor, they are less resource-intensive than virtual machines and offer better performance.

Compared to virtual machines, containers are faster, quicker to provision, and easier to scale. Since spinning up a new container is quick and easy, when a patch or an update is required it is easier to start a new container and stop the old one than to update a running container. This allows us to build immutable infrastructure, which is reliable, portable, and easy to scale.
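That "replace, don't patch" pattern is easy to sketch with the Docker SDK for Python (installable with pip install docker, and assuming a local Docker daemon is running). The container name and image are arbitrary examples:

```python
# A sketch of immutable redeployment: replace the running container
# with a fresh one instead of patching it in place.
import docker

client = docker.from_env()

def redeploy(name, image):
    # Stop and remove the old container instead of updating it...
    try:
        old = client.containers.get(name)
        old.stop()
        old.remove()
    except docker.errors.NotFound:
        pass  # first deployment, nothing to replace
    # ...and start a fresh one from the updated image.
    return client.containers.run(image, name=name, detach=True)

container = redeploy("web", "nginx:latest")
print(container.short_id)
```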
All of this makes containers a preferred choice for application deployment, especially for teams using microservices or a similar architecture, where an application is composed of multiple small services instead of a monolith. In a microservice architecture, an application is built as a suite of independent, self-contained services. This allows teams to work independently of each other and deliver features more quickly. However, decomposing applications into multiple parts adds operational complexity and overhead. Containers solve this problem: they can serve as a building block in the microservice world, where each service is packaged and deployed as a container. A container has everything required to run a service - the service code, its dependencies, configuration files, libraries, and so on. Packaging a service and all its dependencies as a container makes it easy to distribute and deploy, and since the container includes everything required to run the service, it can be deployed reliably in different environments. A service packaged as a container will run the same way locally on a developer's machine, in a test environment, and in production.

However, there are things to consider when using containers. Containers share the kernel and other components of the host operating system, which makes them less isolated than virtual machines, and thus less secure. And while each virtual machine's own kernel lets us run virtual machines with different operating systems on the same physical machine, a container can only run a guest operating system that works with the host's kernel.

Virtual machines vs containers - in conclusion...

Compared to virtual machines, containers are lightweight, performant, and easy to provision. While containers seem to be the obvious choice to build and deploy applications, virtual machines have their own advantages: compared to physical machines, they have better tooling and are easier to automate. Virtual machines and containers can also co-exist - organizations with existing infrastructure built around virtual machines can get the benefits of containers by deploying them on virtual machines.