Deploying Ubuntu in the Cloud

Up until now, in each chapter, we’ve been working with an instance of Ubuntu installed on either a local virtual machine, a physical computer or server, or even a Raspberry Pi. We’ve learned how to deploy Ubuntu on such devices, and we’ve even gone as far as deploying virtual machines as well as containers. These on-premises devices have served us well, but the concept of cloud computing has become quite popular, even more so since the previous edition of this book. In this chapter, we’re going to take a look at running Ubuntu in the cloud. Specifically, we’ll deploy an Ubuntu instance on Amazon Web Services (AWS), which is a very popular platform for cloud computing. While we won’t go into exhaustive detail on AWS (it’s an extremely large and complex platform), you’ll definitely get a feel for what it’s like to deploy resources in the cloud, which will be more than enough to get you started...

Understanding the difference between on-premises and cloud infrastructure

As mentioned at the very beginning of this chapter, we’ve been solely utilizing on-premises Ubuntu installations thus far. Even if we’re running Ubuntu on a virtual machine in our data center, it’s still considered an on-premises installation, despite not running directly on physical hardware. In short, an on-premises installation is something that resides locally with us, regardless of the type of server that serves as the foundation.

The first difference when it comes to cloud computing might be somewhat obvious: it’s the exact opposite of a resource being on-premises. A cloud instance of Ubuntu runs on someone else’s hardware. Most of the time, we won’t know what kind of server a cloud instance is running on. When we subscribe to the services of a cloud provider and pay a fee to run a server on that platform, we’re able to access...

Important considerations when considering cloud computing as a potential solution

Before choosing to sign up with a provider, it’s important to make sure that creating cloud resources is a good idea for you or your organization in the first place. Often, IT professionals can get so excited about a new trend that they make the mistake of trying to use such a service even when it doesn’t make sense to do so. Above all, as an administrator, it’s important to utilize the best tool available for whatever it is that you wish to accomplish, instead of using a technology just because you’re excited about it. Cloud computing is awesome for sure, but for some use cases, it’s just not a good fit. This is similar to containers: containerization is an exciting technology, but some applications just don’t run well on that platform. It takes trial and error.

There are some considerable benefits when it comes to cloud computing...

Becoming familiar with some basic AWS concepts

As discussed earlier, AWS is one of several competing cloud service providers. For the purpose of this chapter, AWS was chosen because, more than any other provider, the platform requires an administrator to adopt a completely different mindset when it comes to managing infrastructure. This different mindset is a healthy one even outside of AWS, so it represents a logical evolution at this point in our journey.

Up until now, we’ve discussed server installations as essentially pets, meaning we want to keep them around, make sure they’re healthy, and if something goes wrong, try to fix it. We want to keep our servers operational for as long as possible. We want to be able to rely on them, and that helps our organization: customers and clients appreciate using a website or service that is stable, with minimal or no downtime.

That last part, minimal downtime, doesn’t change regardless of the mindset we use when...

Creating an AWS account

As mentioned in the previous section, a Virtual Private Cloud (VPC) within AWS represents a high-level abstraction of your overall network. All of the resources that we create will run inside a VPC. Therefore, we’ll need a VPC in place before we can create an Elastic Compute Cloud (EC2) instance and deploy Ubuntu.

Before we can work with a VPC though, we’ll need an AWS account. In previous chapters, I typically advised you to use whatever hardware you had available in order to create Ubuntu installations to work with the platform. This time, we’re going to utilize an actual cloud provider, which comes at a cost. While there are free components available for a limited time with a new account, it’s up to you, the reader, to keep track of billing. We’ll discuss costs in greater detail later in this chapter. But as a general rule of thumb for now, always use the cheapest option available. If a free instance type is available, go with that. Of course, if you’...

Choosing a region

As discussed earlier, a VPC within AWS is the high-level abstraction of your overall network. You can have multiple VPCs, which is similar to the concept of managing several physical networks. In fact, we already have VPCs created for us in our account, so we won’t need to create one. In the future, keep in mind that creating additional VPCs is an option, should you ever need to have more than one. In our account, we have a default VPC in each region, so choosing which one to utilize comes down to which region is most appropriate for our use.
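
If you’d like to confirm this outside of the console, the short Python sketch below uses the boto3 library (which isn’t covered in this chapter) to print the default VPC for a region. It assumes boto3 is installed and AWS credentials are already configured on your workstation, and the region name is just an example choice.

# A minimal sketch: print the default VPC in one region.
# Assumes boto3 is installed and credentials are configured (e.g., via aws configure).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

# Every VPC in the region is returned; the default one is flagged with IsDefault
for vpc in ec2.describe_vpcs()["Vpcs"]:
    if vpc.get("IsDefault"):
        print(f"Default VPC: {vpc['VpcId']}  CIDR: {vpc['CidrBlock']}")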

For production use, you’ll want to create instances in AWS that are as close to your customers as you can get. For example, let’s say that the customers that your organization markets to are primarily located in the Eastern United States. There’s a region available within AWS labeled US East, so that would be an obvious choice in that scenario. You’re not...

Deploying Ubuntu as an AWS EC2 instance

With a great deal of discussion out of the way, it’s time to create an actual Ubuntu deployment in the cloud. This will allow us to see AWS in action and give us some working experience with the EC2 service. This requires two individual steps: the first is to create a required Identity and Access Management (IAM) role, and the second is to create our instance. Let’s first make sure we understand the requirements of the IAM role, then we’ll set up the role and create our new instance.
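
We’ll perform both steps in the AWS console, but as a rough preview of what the second step amounts to, here is a hedged boto3 sketch of launching an Ubuntu instance programmatically. The AMI ID and tag value are placeholders, and the instance profile name is an example that corresponds to the Session Manager role we’ll set up in the next subsection; substitute your own values if you try this.

# A minimal sketch: launch a single Ubuntu instance with boto3.
# All IDs and names below are placeholders, not values from this chapter.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: an Ubuntu Server AMI for your region
    InstanceType="t2.micro",           # typically free-tier eligible
    MinCount=1,
    MaxCount=1,
    # Hypothetical instance profile created for Session Manager (next subsection)
    IamInstanceProfile={"Name": "SessionManagerProfile"},
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "ubuntu-test"}],
    }],
)

print("Launched instance:", response["Instances"][0]["InstanceId"])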

Setting up an IAM role for Session Manager

Session Manager is a feature within AWS that we can use to access a command prompt for our instance. It’s actually part of Systems Manager rather than a standalone service, so if you want to access Session Manager, you’ll need to search for Systems Manager, and you’ll find Session Manager underneath that. You’ll see this shortly.
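
If you’d rather script this part than click through the console, the sketch below shows the general shape of the IAM setup with boto3: create a role that EC2 instances can assume, attach the AWS-managed AmazonSSMManagedInstanceCore policy that Session Manager relies on, and wrap the role in an instance profile. The role and profile names are example values of my own choosing.

# A minimal sketch of the IAM pieces Session Manager needs, using example names.
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing EC2 instances to assume this role
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="SessionManagerRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# AWS-managed policy that grants the permissions Session Manager requires
iam.attach_role_policy(
    RoleName="SessionManagerRole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore",
)

# EC2 attaches roles through instance profiles, so create one and add the role to it
iam.create_instance_profile(InstanceProfileName="SessionManagerProfile")
iam.add_role_to_instance_profile(
    InstanceProfileName="SessionManagerProfile",
    RoleName="SessionManagerRole",
)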

Why should we use Session Manager? Just like with any other...

Creating and deploying Ubuntu AMIs

Just about every cloud platform I know of includes some sort of feature that can be used to create images of an instance’s hard disk. An image can be used to create copies of the original server, as well as act as a starting point, so that if the server needs to be rebuilt, we won’t have to start over from scratch. In AWS, images are known as Amazon Machine Images (AMIs). For all intents and purposes, there’s nothing particularly unique about AMIs; if you’ve worked with disk images in the past, it’s the same concept. When it comes to what to include in an AMI, you can (and should) use your imagination here: anything you find yourself manually setting up or configuring while rolling out a new server is a candidate to be included in an image, and the more customizations you include inside the image, the more time it will save you later.
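
The console walkthrough that follows is the path we’ll actually take, but as a rough illustration of how simple the operation is, the boto3 sketch below captures an AMI from an existing instance. The instance ID and image name are placeholders; by default, AWS reboots the instance so the snapshot is taken from a consistent filesystem.

# A minimal sketch: create an AMI from a running instance (placeholder values).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # placeholder: the instance to capture
    Name="ubuntu-web-base-image",       # example image name
    Description="Ubuntu web server with our baseline configuration",
)

print("Created AMI:", response["ImageId"])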

Let’s see this in action and create an image of the server we’ve just...

Automatically scaling Ubuntu EC2 deployments with Auto Scaling

If we maintain one or more servers for our organization, it can sometimes be hard to predict what the demand on those servers will be. In the case of a popular news site, some articles may be more popular than others, and if something goes viral online, then requests to our site can increase by orders of magnitude in a short period of time. In the past, keeping up with customer demand was a very tedious process, one that might have resulted in having to purchase an entirely new server with more powerful hardware. With our instance being in the cloud, we have more flexibility and can automate the process of bringing more servers online. And that’s exactly what we’re going to work on in this section.
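
To make the moving parts a bit more concrete before we begin, here is a compressed boto3 sketch of the two pieces an Auto Scaling setup needs: a launch template that describes what each instance should look like, and an Auto Scaling group that decides how many of those instances to keep running. Every name and ID below is a placeholder rather than something we created earlier, and a real deployment would also attach the group to a load balancer and define scaling policies.

# A minimal sketch: launch template plus Auto Scaling group (placeholder values).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# The launch template tells Auto Scaling how to build each instance
ec2.create_launch_template(
    LaunchTemplateName="ubuntu-web-template",
    LaunchTemplateData={
        "ImageId": "ami-0123456789abcdef0",   # placeholder: our custom AMI
        "InstanceType": "t2.micro",
    },
)

# The group keeps the running instance count between MinSize and MaxSize
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="ubuntu-web-asg",
    LaunchTemplate={"LaunchTemplateName": "ubuntu-web-template"},
    MinSize=1,
    MaxSize=3,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-0123456789abcdef0",  # placeholder: subnet(s) to launch into
)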

Before we get started, keep in mind that we don’t actually have a popular server in AWS; we only have a simple test server that’s currently running Apache. We can simulate things to a point, but Auto Scaling...

Keeping costs down: understanding how to save money and make cost-effective decisions

As you just saw, there were many components and configurations we had to implement in order to build a load-balanced solution in AWS. As we grow our AWS infrastructure and implement more solutions, we should also keep an eye on our bill. Although we can utilize the free tier for now, production applications will likely need more powerful instances than what the free tier will provide, and the free tier itself won’t last forever. Not only that, but we should also know how to check how our bill is trending to make sure we don’t accidentally implement something that is expensive or waste money by running something we no longer need.
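
One habit that helps here is checking your spend programmatically instead of waiting for the monthly invoice. The Python sketch below queries the Cost Explorer API through boto3 for the month-to-date total; keep in mind that Cost Explorer has to be enabled in the account first, and each API request itself carries a small charge at the time of writing.

# A minimal sketch: print month-to-date spend via the Cost Explorer API.
from datetime import date, timedelta

import boto3

# Cost Explorer is served from the us-east-1 endpoint
ce = boto3.client("ce", region_name="us-east-1")

start = date.today().replace(day=1).isoformat()          # first day of the current month
end = (date.today() + timedelta(days=1)).isoformat()     # End is exclusive, so include today

response = ce.get_cost_and_usage(
    TimePeriod={"Start": start, "End": end},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
)

for period in response["ResultsByTime"]:
    total = period["Total"]["UnblendedCost"]
    print(f"Month-to-date spend: {total['Amount']} {total['Unit']}")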

In this section, we’ll explore some concepts around billing. Although it’s beyond the scope of this chapter to do a complete deep dive into the world of billing, the subsections that follow will provide you with essential advice to help prevent...

Taking the cloud further: additional resources to grow your knowledge

AWS is a huge service, and we haven’t even scratched the surface of the platform in this chapter. We created a simple load-balanced application in an earlier section, and we’ll even learn how to automate creating cloud resources in the next chapter. But if you’ve found this chapter fun and want to work with AWS more and enhance your skills, I thought I’d provide some additional advice that will hopefully help you do that.

Online training and labs

There are quite a few online resources available to expand your knowledge. Some of these resources are free, such as a section of the AWS website that provides free hands-on training: https://aws.amazon.com/training/digital/

You may already be aware of the value of YouTube when it comes to training videos; it’s a great source of knowledge. (And you may have even stumbled across my YouTube channel, over...

Summary

This chapter has been one of the most involved in the entire book so far, and you’ve accomplished a lot. During the course of this chapter, you learned about AWS, set up your own cloud server, set up Auto Scaling to ensure that your server is able to automatically heal from disasters, and even set up a load balancer to enable routing between multiple instances. Make sure you take some time to let all this knowledge sink in before continuing on, and I also recommend you spend some additional time with AWS before moving on to the next chapter.

Speaking of the next chapter, we’re going to work with AWS again, but this time, we’re going to focus on learning Terraform, which is an awesome tool that will enable us to automate the building of our cloud resources from the ground up. It’s going to be a lot of fun.

Further reading

Join our community on Discord

Join our community’s Discord space for discussions with the author and other readers:

https://packt.link/LWaZ0
