Introducing Cloud Native Architecture and Microservices
Here we go! Before we begin to build our application, we need to answer the following questions:
- What is cloud computing? What are its different types?
- What are microservices, and what concepts underpin them?
- What are the basic requirements to get started?
In this chapter, we will focus on the different concepts that a developer or application programmer should understand before they start writing an application.
Let's first understand a bit about system building and how it evolves.
For a long time now, we have been discovering better approaches to building systems. As new technologies advance and better practices are adopted, IT systems become more reliable and effective for clients (or customers), and keep engineers happy.
Continuous delivery helps us move our software through the development cycle into production, lets us identify error-prone aspects of the software early, and instills in us the idea of treating every check-in to the code base as a suitable candidate for release to production.
Our understanding of how the web works has led us to develop better ways for machines to talk to other machines. Virtualization platforms have allowed us to provision and resize our machines on demand, with infrastructure automation giving us a way to manage these machines at scale. Some huge, successful cloud platforms, such as Amazon, Azure, and Google, have embraced the view of small teams owning the full life cycle of their services. Concepts such as Domain-Driven Design (DDD), continuous delivery (CD), on-demand virtualization, infrastructure automation, small autonomous teams, and systems at scale are the traits that, together, get our software into production effectively and efficiently. Microservices have emerged from this world. They weren't invented or described before the fact; they emerged as a trend from real-world usage. Throughout this book, I will pull strands out of this prior work to help illustrate how to build, manage, and evolve microservices.
Many organizations have found that by embracing fine-grained microservice architectures, they can deliver software quickly and adopt newer technologies. Microservices give us significantly more freedom to react and make different decisions, allowing us to respond quickly to the inevitable changes that affect all of us.
Introduction to cloud computing
Before we begin with microservices and cloud native concepts, let's first understand what cloud computing is all about.
Cloud computing is a broad term that describes a wide range of services. As with other major advances in technology, many vendors have seized on the term cloud and use it for products that sit outside the common definition. Since the cloud is a broad collection of services, organizations can choose where, when, and how they use cloud computing.
The cloud computing services can be categorized as follows:
- SaaS: These are ready-made applications that can be consumed directly by end users
- PaaS: These are a collection of tools and services that are useful for users/developers who want to either build their applications or quickly host them directly in production without worrying about the underlying hardware
- IaaS: This is for customers who want to build their own business model and customize it
Cloud computing, as a stack, can be explained as follows:
- Cloud computing is often referred to as a stack, which is basically a wide range of services in which each service is built on top of another, under the common term cloud
- The cloud computing model is considered as a collection of different configurable computing resources (such as servers, databases, and storage), which communicate with each other, and can be provisioned with minimal supervision
The following diagram showcases the cloud computing stack components:
Let's understand cloud computing components in detail, along with their use cases.
Software as a Service
The following are the key points that describe SaaS:
Software as a Service (SaaS) offers users the ability to access software hosted on the service provider's premises, delivered as a service over the internet through a web browser. These services are subscription based, and are also referred to as on-demand software.
Companies with SaaS offerings include Google (the Google Docs productivity suite), Oracle (Oracle CRM, Customer Relationship Management), Microsoft (Office 365), Salesforce (Salesforce CRM), and Intuit (QuickBooks).
SaaS can be further categorized as vertical SaaS, which focuses on the needs of specific industries, such as healthcare and agriculture, or horizontal SaaS, which focuses on a business function across industries, such as human resources and sales.
SaaS offerings are, basically, for organizations that want to quickly adopt existing applications that are easy to use and understand, even for a non-technical person. Based on usage and budget, enterprises can select suitable support plans. Additionally, you can access these SaaS applications from anywhere around the globe, and from any device with internet access.
Platform as a Service
The following are the key points that describe PaaS:
In PaaS offerings, the organization/enterprise need not worry about hardware and software infrastructure management for their in-house applications
The biggest benefit of PaaS is for development teams (local or remote), which can efficiently build, test, and deploy their applications on a common framework, wherein the underlying hardware and software is managed by the PaaS service provider
The PaaS service provider delivers the platform, and also provides different services around the platform
Examples of PaaS providers include Amazon Web Services (AWS Elastic Beanstalk), Microsoft Azure (Azure Websites), Google App Engine, and Oracle (Big Data Cloud Service)
Infrastructure as a Service
The following are the key points that describe IaaS:
Unlike SaaS offerings, in IaaS, the customer is provided with IT resources, such as bare-metal machines to run applications, hard disks for storage, and network cables for network capability, all of which they can customize based on their business model.
In IaaS offerings, since the customer has full access to their infrastructure, they can scale their IT resources based on their application requirement. Also, in IaaS offerings, the customer has to manage the security of the application/resources, and needs to build disaster recovery models in case of sudden failures/crashes.
In IaaS, services are on an on-demand basis, where the customer is charged on usage. So, it's the customer's responsibility to do cost analysis against their resources, which will help restrict them from exceeding their budget.
It allows customers/consumers to customize their infrastructure based on the requirements of the application, and then tear down and recreate the infrastructure very quickly and efficiently.
The pricing model for IaaS-based services is basically on-demand, which means you pay as you go. You are charged as per your usage of resources and the duration of the usage.
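As a rough illustration of the pay-as-you-go model, the following sketch estimates a monthly bill from usage. The rates used here are made-up placeholders, not real prices from any provider.

```python
# Illustrative pay-as-you-go estimate; the rates below are made-up
# placeholders, not real prices from any cloud provider.
def monthly_cost(hourly_rate, hours_used, storage_gb=0, gb_month_rate=0.0):
    """Estimate a monthly bill: compute time plus storage."""
    return hourly_rate * hours_used + storage_gb * gb_month_rate

# A small VM running for 720 hours, plus 100 GB of storage.
bill = monthly_cost(hourly_rate=0.05, hours_used=720,
                    storage_gb=100, gb_month_rate=0.02)
print(f"Estimated bill: ${bill:.2f}")  # prints: Estimated bill: $38.00
```

This kind of back-of-the-envelope calculation is exactly the cost analysis that keeps an IaaS deployment within budget.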
Amazon Web Services (offering Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3)) was the first out of the gate in this cloud offering; however, players such as Microsoft Azure (virtual machine), Rackspace (virtual cloud servers) and Oracle (bare metal cloud services) have also made a name for themselves.
The cloud native concepts
Cloud native is about structuring teams, culture, and technology to utilize automation and architecture to manage complexity and unlock velocity.
The cloud native concept goes beyond the technologies with which it is associated. We need to understand how companies, teams, and people are successful in order to understand where our industry is going.
Currently, companies such as Facebook and Netflix have dedicated a large amount of resources to cloud native techniques, and smaller, more flexible companies have also realized the value of these techniques.
With feedback from the proven practices of cloud native, the following are some of the advantages that come to light:
Result-oriented and team satisfaction: The cloud native approach shows the way to break a large problem into smaller ones, which allows each team to focus on the individual part.
Reduced grunt work: Automation reduces the repetitive manual tasks that cause operational pain, and reduces downtime. This makes your system more productive, and gives more efficient outcomes.
Reliable and efficient application infrastructure: Automation brings more control over deployment in different environments, whether development, stage, or production, and also handles unexpected events or failures. Building automation not only helps normal deployment, but it also makes deployment easy when it comes to a disaster recovery situation.
Insights over application: The tools built around cloud native applications provide more insights into applications, which make them easy to debug, troubleshoot, and audit.
Efficient and reliable security: In every application, security is a main concern, along with making sure that the application is accessible via the required channels with authentication. The cloud native approach provides different ways for the developer to ensure the security of the application.
Cost-effective system: The cloud approach to managing and deploying your application enables efficient usage of resources, which also includes application release and, hence, makes the system cost effective by reducing the wastage of resources.
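The failure handling mentioned above is often implemented as retry with exponential backoff, a common automation pattern. The following is a minimal sketch; the function names, delays, and the simulated flaky step are hypothetical, not taken from any particular tool.

```python
import time

# Minimal retry-with-backoff helper, a common pattern in deployment
# automation for riding out transient failures. Names and delays are
# illustrative only.
def retry(operation, attempts=3, base_delay=0.01):
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # back off and retry

calls = {"n": 0}
def flaky_step():
    # Simulate a step that fails twice before succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

print(retry(flaky_step))  # prints: deployed
```

Real deployment tools layer health checks and alerting on top of this basic idea.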
Cloud native - what it means and why it matters
Cloud native is a broad term covering different techniques, such as infrastructure automation, middleware, and backing services, that are part of your application delivery cycle. The cloud native approach includes frequent software releases that are stable and bug-free, and the ability to scale the application as per business requirements.
Using the cloud native approach, you will be able to achieve your goal toward application building in a systematic manner.
The cloud native approach is much better than legacy virtualization-oriented orchestration, which needs a lot of effort to build an environment suitable for development, and then a very different one for the software delivery process. An ideal cloud native architecture should have automation and composition functionalities, which work on your behalf. These automation techniques should also be able to manage and deploy your application across different platforms and give you results.
There are a couple of other operational factors that your cloud native architecture should address, such as steady logging and monitoring of the application and infrastructure, to make sure the application is up and running.
The cloud native approach really helps developers build their application across different platforms using tools such as Docker, which is lightweight and easy to create and destroy.
The cloud native runtimes
Containers are the best solution for getting software to run reliably when moved from one computing environment to another. This could be from a developer's machine to the stage environment into production, or from a physical machine to a virtual machine in a private or public cloud. Kubernetes has become synonymous with container orchestration, and is getting popular nowadays.
With the rise of cloud native frameworks and an increase in the applications built around them, the attributes of container orchestration have received more attention and usage. Here is what you need from a container runtime:
- Managing container state and high availability: Be sure to maintain the state (such as create and destroy) of containers, specifically in production, as they are very important from a business perspective, and should be able to scale as well, based on business needs
- Cost analysis and realization: Containers give you control over resource management as per your business budget, and can reduce costs to a large extent
- Isolated environment: Each process that runs within a container should remain isolated within that container
- Load balancing across clusters: Application traffic, which is basically handled by a cluster of containers, should be redirected equally across the containers, which will improve the application's response time and maintain high availability
- Debugging and disaster recovery: Since we are dealing with the production system here, we need to make sure we have the right tools to monitor the health of the application, and to take the necessary action to avoid downtime and provide high availability
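The load-balancing requirement above can be illustrated with a minimal round-robin dispatcher. This is a conceptual sketch only; production orchestrators such as Kubernetes add health checks, weighting, and much more.

```python
import itertools

# Conceptual sketch of round-robin load balancing across a pool of
# container instances; the instance names are placeholders.
class RoundRobinBalancer:
    def __init__(self, instances):
        self._cycle = itertools.cycle(instances)

    def next_instance(self):
        """Return the instance that should receive the next request."""
        return next(self._cycle)

pool = RoundRobinBalancer(["container-a", "container-b", "container-c"])
print([pool.next_instance() for _ in range(6)])
# each container appears twice, in rotation
```

Spreading requests evenly like this is what keeps any single container from becoming a hotspot.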
Cloud native architecture
The cloud native architecture is similar to any application architecture that we create for a legacy system, but in the cloud native application architecture, we should consider a few characteristics, such as a twelve-factor application (collection of patterns for app development), microservices (decomposition of a monolithic business system into independent deployable services), self-service agile infrastructure (self-service platform), API-based collaboration (interaction between services via API), and antifragility (self-realizing and strengthening the application).
First, let's discuss what microservices are all about.
Microservices is a broad approach that breaks large applications into smaller modules to get them developed and made mature enough for release. This approach not only helps to manage each module efficiently, but it also surfaces issues at the lower levels themselves. The following are some of the key aspects of microservices:
- User-friendly interfaces: Microservices enable a clear separation between services. Versioning of microservices enables more control over APIs, and it also provides more freedom for both the consumers and producers of these services.
- Deployment and management of APIs across the platform: Since each microservice is a separate entity, it is possible to update a single microservice without making changes to the others. Also, it is easier to roll back changes for a microservice. This means the artifacts that are deployed for microservices should be compatible in terms of API and data schemas. These APIs must be tested across different platforms, and the test results should be shared across different teams, that is, operations, developers, and so on, to maintain a centralized control system.
- Flexibility in application: Microservices that are developed should be capable of handling a request and must respond, irrespective of the kind of request, which could be a bad input or an invalid request. Also, your microservice should be able to deal with an unexpected load and respond appropriately. All of these microservices should be tested independently, as well as in integration.
- Distribution of microservices: It's better to split the services into small chunks so that they can be tracked and developed individually and combined to form a microservice. This technique makes microservices development more efficient and stable.
The following diagram shows a cloud native application's high-level architecture:
The application architecture should ideally start with two or three services, and then try to expand with further versions. It is very important to understand the application architecture, as it may need to integrate with different components of the system, and it is possible that a separate team manages those components when it comes to large organizations. Versioning in microservices is vital, as it identifies the supported method during the specified phase of development.
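API versioning can be sketched as a route table keyed by version, so that existing consumers keep working while a new version evolves independently. The handlers, paths, and payloads below are hypothetical examples, not part of any framework.

```python
# Hypothetical in-process router: maps (version, path) to a handler,
# so v1 consumers keep working while v2 evolves independently.
def get_user_v1(user_id):
    return {"id": user_id, "name": "Jane"}

def get_user_v2(user_id):
    # v2 splits the name field without breaking v1 clients.
    return {"id": user_id, "first_name": "Jane", "last_name": "Doe"}

ROUTES = {
    ("v1", "users"): get_user_v1,
    ("v2", "users"): get_user_v2,
}

def dispatch(version, path, *args):
    handler = ROUTES.get((version, path))
    if handler is None:
        return {"error": "unsupported version or path"}
    return handler(*args)

print(dispatch("v1", "users", 42))  # {'id': 42, 'name': 'Jane'}
print(dispatch("v2", "users", 42))  # {'id': 42, 'first_name': 'Jane', 'last_name': 'Doe'}
```

In a real microservice, the version would typically live in the URL path or an HTTP header, but the routing principle is the same.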
Are microservices a new concept?
Microservices have been in the industry for a very long time now. They are another way of creating a distinction between the different components of a large system. Microservices work in a similar fashion, where they act as a link between the different services, and handle the flow of data for a particular transaction based on the type of request.
The following diagram depicts the architecture of microservices:
Why is Python the best choice for cloud native microservices development?
Why do I choose Python, and recommend it to as many people as possible? Well, it comes down to the reasons explained in the upcoming subsections.
Python is a highly expressive and easy-to-learn programming language. Even an amateur can easily discover the different functionalities and scope of Python. Unlike other programming languages, such as Java, which focus more on parentheses, brackets, commas, and colons, Python lets you spend more time on programming and less time on debugging the syntax.
Libraries and community
Python's broad range of libraries is very portable across different platforms, such as Unix, Windows, and OS X. These libraries can be easily extended based on your application/program requirements. There is a huge community that works on building these libraries, and this makes Python the best fit for business use cases.
As far as the Python community is concerned, the Python User Group (PUG) is a community that works on the community-based development model to increase the popularity of Python around the globe. These group members give talks on Python-based frameworks, which help us build large systems.
The Python interactive mode helps you debug and test a snippet of code, which can later be added as a part of the main program.
Python provides better structure and concepts, such as modules, for maintaining large programs in a more systematic manner than any other scripting language, such as shell scripting.
Understanding the twelve-factor app
Cloud native applications conform to a contract designed to maximize resilience through predictable practices. This contract takes the form of a manifesto of sorts called the twelve-factor app. It outlines a methodology for developers to follow when building modern web-based applications. Developers must change how they code, creating a new contract between the developers and the infrastructure that their applications run on.
The following are a few points to consider when developing a cloud native application:
Use an informative design to increase application usability, delivering to customers with minimal time and cost through automation
Use application portability across different environments (such as stage and production) and different platforms (such as Unix or Windows)
Use application suitability over cloud platforms and understand the resource allocation and management
Use identical environments to reduce bugs with continuous delivery/deployment for maximum agility of software release
Enable high availability by scaling the application with minimal supervision and designing disaster-recovery architectures
Many of the twelve factors interact with each other. They focus on speed, safety, and scale by emphasizing declarative configuration. A twelve-factor app can be described as follows:
Centralized code base: All deployed code is tracked in revision control, and multiple instances can be deployed from a single code base onto multiple platforms.
Dependencies management: An app should be able to declare the dependencies, and isolate them using tools such as Bundler, pip, and Maven.
Defining configuration: Configurations (that is, environment variables) that are likely to be different in different deployment environments (such as development, stage, and production) should be defined at the operating-system level.
Backing services: Every backing service is treated as a part of the application itself. Backing services such as databases and message queues should be considered as attached resources, and consumed identically in all environments.
Isolation in the build, release, and run cycle: This involves strict separation between the stages: building an artifact, combining the artifact with configuration to form a release, and then starting one or more instances from that artifact and configuration combination.
Stateless processes: The app should execute one or more instances/processes (for example, master/workers) that share nothing.
Services port binding: The application should be self-contained, and if any/all services need to be exposed, then it should be done via port binding (preferably HTTP).
Scaling stateless processes: The architecture should emphasize stateless process management in the underlying platform instead of adding more complexity to the application.
Process state management: Processes should scale up very quickly and shut down gracefully within a short time period. These aspects enable rapid scalability, deployment of changes, and disaster recovery.
Continuous delivery/deployment to production: Always try to keep your different environments similar, whether it is development, stage, or production. This will ensure that you get similar results across multiple environments, and enable continuous delivery from development to production.
Logs as event streams: Logging is very important, whether it is platform level or application level, as this helps understand the activity of the application. Enable different deployable environments (preferably production) to collect, aggregate, index, and analyze the events via centralized services.
Ad hoc tasks as one-off processes: In the cloud native approach, management tasks (for example, database migration) that run as a part of a release should be run as one-off processes in the environment, as opposed to the regular app with long-running processes.
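The configuration factor above can be sketched in a few lines of Python: settings that vary between environments are read from environment variables rather than hard-coded. The variable names and defaults here are illustrative assumptions, not a fixed convention.

```python
import os

# Twelve-factor config: read deployment-specific settings from the
# environment rather than hard-coding them. The variable names and
# fallback values below are examples only.
def load_config():
    return {
        "database_url": os.environ.get("DATABASE_URL",
                                       "sqlite:///dev.db"),  # dev fallback
        "port": int(os.environ.get("PORT", "8080")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

config = load_config()
print(config["port"])
```

With this approach, the same build artifact runs unchanged in development, stage, and production; only the environment differs.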
Cloud application platforms such as Cloud Foundry, Heroku, and Amazon Beanstalk are optimized for deploying twelve-factor apps.
Following all these standards and integrating applications with consistent engineering interfaces, that is, a stateless process design, produces distributed applications that are cloud ready. Python revolutionized application frameworks with its opinionated, convention-over-configuration approach to web development.
Setting up the Python environment
As we will demonstrate throughout this book, having the right environment (local or for your automated builds) is crucial to the success of any development project. If a workstation has the right tools, and is set up properly, developing on that workstation can feel like a breath of fresh air. Conversely, a poorly set up environment can suffocate any developer trying to use it.
The following are the prerequisite accounts that we require in the later part of the book:
- A GitHub account needs to be created for source code management. Use the article on the following link to do so:
- AWS and Azure accounts are required for application deployment. Use the articles given on the following links to create these:
Now, let's set up some of the tools that we will need during our development project.
Git (https://git-scm.com) is a free and open source distributed version control system designed to handle everything, from small to very large projects, with speed and efficiency.
Installing Git on a Debian-based distribution (such as Ubuntu)
There are a couple of ways by which you can install Git on a Debian system:
- Using the Advanced Package Tool (APT) package management tools:
You can use the APT package management tools to update your local package index. Then, you can download and install the latest Git using the following commands as the root user:
$ apt-get update -y
$ apt-get install git -y
The preceding commands will download and install Git on your system.
- Using the source code, you can do the following:
- Download the source from the GitHub repository, and compile the software from the source.
1. Before you begin, let's first install the dependencies of Git; execute the following commands as the root user to do so:
$ apt-get update -y
$ apt-get install build-essential libssl-dev libcurl4-gnutls-dev libexpat1-dev gettext unzip -y
2. After we have installed the necessary dependencies, let's go to the Git project repository (https://github.com/git/git) to download the source code, as follows:
$ wget https://github.com/git/git/archive/v1.9.1.zip -O git.zip
3. Now, unzip the downloaded ZIP file using the following commands:
$ unzip git.zip
$ cd git-*
4. Now you have to make the package and install it as a sudo user. For this, use the commands given next:
$ make prefix=/usr/local all
$ make prefix=/usr/local install
The preceding commands will install Git on your system at /usr/local.
Setting up Git on a Debian-based distribution
Now that we have installed Git on our system, we need to set some configuration so that the commit messages that will be generated for you contain your correct information.
Basically, we need to provide the name and email in the config. Let's add these values using the following commands:
$ git config --global user.name "Manish Sethi"
$ git config --global user.email firstname.lastname@example.org
Installing Git on Windows
Let's install Git on Windows; you can download the latest version of Git from the official website (https://git-scm.com/download/win). Follow the steps listed next to install Git on a Windows system:
- Once the .exe file is downloaded, double-click on it to run it. First of all, you will be provided with a GNU license, as seen in this screenshot:
Click on Next:
In the section shown in the preceding screenshot, you can customize your setup based on the tools that are needed, or you can keep the defaults, which is fine from the book's perspective.
- Additionally, you can install Git Bash along with Git; click on Next:
In the section seen in the next screenshot, you can enable other features that come along with Git packages. Then, click on Next:
- You can skip the rest of the steps by clicking on Next, and go for the installation part.
Once you complete the installation, you will be able to see a screen like this:
Great! We have successfully installed Git on Windows!
Alternatively, you can use Chocolatey; this is my preferred way to install Git for Windows on Windows 10. It installs the same package as before, but in one line. If you have not heard of Chocolatey, stop everything, and go learn a bit more about it. It can install software with a single command; you don't have to use click-through installers anymore!
Chocolatey is very powerful, and I use it in combination with Boxstarter to set up my dev machines. If you are in charge of setting up machines for developers on Windows, it is definitely worth a look.
Let's see how you would install Git using Chocolatey. I assume you have Chocolatey installed (https://chocolatey.org/install) already (it's a one-liner in Command Prompt). Then, simply open the Administrator Command window, and type this command:
$ choco install git -params '"/GitAndUnixToolsOnPath"'
This will install Git and the BASH tools, and add them to your path.
Installing Git on Mac
Before we begin with the Git installation, we need to install command-line tools for OS X.
Installing the command-line tools for OS X
In order to install any developer tools, you will need to install Xcode (https://developer.apple.com/xcode/), which is a nearly 4 GB developer suite. Apple offers this for free from the Mac App Store. In order to install Git and set up GitHub, you will need certain command-line tools, which are part of the Xcode development tools.
If you have enough space, download and install Xcode, which is basically a complete package of development tools.
You will need to create an Apple developer account at developer.apple.com in order to download command-line tools. Once you have set up your account, you can select the command-line tools or Xcode based on the version, as follows:
- If you are on OS X 10.7.x, download the 10.7 command-line tools. If you are on OS X 10.8.x, download the 10.8 command-line tools.
- Once it is downloaded, open the DMG file, and follow the instructions to install it.
Installing Git for OS X
Installing Git on Mac is pretty much similar to how you install it on Windows. Instead of using the .exe file, we have the dmg file, which you can download from the Git website (https://git-scm.com/download/mac) for installation as follows:
- Double-click on the downloaded dmg file. It will open a Finder window with the following files:
- Double-click on the package (that is, git-2.10.1-intel-universal-mavericks.dmg) file; it will open the installation wizard to install, as seen in the following screenshot:
- Click on Install to begin the installation:
- Once the installation is complete, you will see something like this:
Installing and configuring Python
Now, let's install Python, which we will use to build our microservices. We will be using the Python 3.x version throughout the book.
Installing Python on a Debian-based distribution (such as Ubuntu)
There are different ways to install Python on a Debian-based distribution.
Using the APT package management tools
You can use the APT package management tools to update your local package index. Then, you can download and install the latest Python using the following commands as a root user:
$ apt-get update -y
$ apt-get install python3 -y
The following packages will automatically be downloaded and installed, as these are the prerequisites for Python 3 installation:
libpython3-dev libpython3.4 libpython3.4-dev python3-chardet
python3-colorama python3-dev python3-distlib python3-html5lib
python3-requests python3-six python3-urllib3 python3-wheel python3.4-dev
Once the prerequisites are installed, it will download and install Python on your system.
Using source code
You can download the source code from the GitHub repository and compile the software from the source, as follows:
- Before you begin, let's first install the build dependencies of Python; execute the following commands as the root user to do so:
$ apt-get update -y
$ apt-get install build-essential checkinstall libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev -y
- Now, let's download Python (https://www.python.org) using the following command from Python's official website. You can also download the latest version in place, as specified:
$ cd /usr/local
$ wget https://www.python.org/ftp/python/3.4.6/Python-3.4.6.tgz
- Now, let's extract the downloaded package with this command:
$ tar xzf Python-3.4.6.tgz
- Now we have to compile the source code. Use the following set of commands to do so:
$ cd Python-3.4.6
$ sudo ./configure
$ sudo make altinstall
- The preceding commands will install Python on your system at /usr/local. Use the following command to check the Python version:
$ python3 -V
Python 3.4.6
Installing Python on Windows
Now, let's see how we can install Python on Windows 7 or later systems. Installation of Python on Windows is pretty simple and quick; we will be using Python 3 and above, which you can download from Python's download page (https://www.python.org/downloads/windows/). Now perform the following steps:
- Download the Windows x86-64 executable installer based on your system configuration, and open it to begin the installation, as shown in the following screenshot:
- Next, select the type of installation you want to go with. We will click on Install Now to go for the default installation, as seen in this screenshot:
- Once the installation is complete, you will see the following screen:
Great! We have successfully installed Python on Windows.
Installing Python on Mac
Before we begin with the Python installation, we need to install the command-line tools for OS X. If you have already installed the command-line tools at the time of Git installation, you can ignore this step.
Installing the command-line tools for OS X
In order to install any developer, you need to install Xcode (https://developer.apple.com/xcode/); you will need to set up an account on connect.apple.com to download the respective Xcode version tools.
However, there is a simpler way to install the command-line tools, using a utility that ships with Xcode called xcode-select, as shown here:
% xcode-select --install
The preceding command should trigger an installation wizard for the command-line tools. Follow the installation wizard, and you will be able to install it successfully.
Installing Python for OS X
Installing Python on Mac is quite similar to how you install Git on Windows. You can download the Python package from the official website (https://www.python.org/downloads/). Proceed with the following steps:
- Once the Python package is downloaded, double-click on it to begin the installation; it will show the following pop-up window:
- The next step will show the release notes and the respective Python version information:
- Next, you will need to agree to the license, which is mandatory for the installation:
- Next, it will show you the installation-related information, such as the disk occupied and the path. Click on Install to begin:
- Once the installation is complete, you will see the following screen:
- Use the following command to check the installed Python version:
% python3 -V
Great! Python is successfully installed.
Getting familiar with the GitHub and Git commands
In this section, we will go through a list of Git commands, which we will be using frequently throughout the book:
- git init: This command initializes your local repository once when you are setting it up for the first time
- git remote add origin <server>: This command links your local directory to the remote server repository so that all the changes pushed are saved in the remote repository
- git status: This command lists the files/directories that are yet to be added, or are modified and need to be committed
- git add * or git add <filename>: This command adds files/directories so that they can be tracked, and makes them ready to be committed
- git commit -m "Commit message": This command commits your tracked changes on the local machine and generates a commit ID by which the updated code can be identified
- git commit -am "Commit message": The difference between the previous command and this one is that the -a flag automatically stages all modified, tracked files before committing, so you can skip a separate git add step (if you omit -m, Git instead opens the default editor of your operating system, such as Vim on Ubuntu or Notepad on Windows, for the commit message)
- git push origin master: This command pushes the last committed code from the local directory to the remote repository
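To see several of these commands working together, the following Python sketch drives a throwaway repository through the init/add/commit cycle via subprocess. It assumes the git binary is on your PATH; the file name, identity values, and commit message are arbitrary examples, not anything from the book's repository:

```python
import os
import subprocess
import tempfile

def git(args, cwd):
    """Run a git command in the given directory and return its stdout."""
    result = subprocess.run(['git'] + args, cwd=cwd, check=True,
                            capture_output=True, text=True)
    return result.stdout

repo = tempfile.mkdtemp()
git(['init'], repo)
# An identity is required before committing; these values are placeholders
git(['config', 'user.email', 'dev@example.com'], repo)
git(['config', 'user.name', 'Developer'], repo)

# Create a file, stage it, and commit it
with open(os.path.join(repo, 'hello.py'), 'w') as f:
    f.write("print('Hello World!')\n")
git(['add', 'hello.py'], repo)
git(['commit', '-m', 'Initial commit'], repo)

history = git(['log', '--oneline'], repo)
print(history)
```

Running the script prints a one-line log entry for the new commit, mirroring what you would see after running the same commands by hand.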
Test everything to make sure our environment works.
Here we go. We have installed both Git and Python in the last section, which are needed to begin building microservices. In this section, we will focus on testing the installed packages and try to get familiar with them.
The first thing we can do is exercise the git clone command, which fetches Python code from a remote repository (usually GitHub) over HTTPS and copies it into our current workspace in an appropriately named directory:
$ git clone https://github.com/PacktPublishing/Cloud-Native-Python
The preceding command will create a directory named Cloud-Native-Python on your local machine; switch to the Cloud-Native-Python/chapter1 path from the current location.
We will need to install the app's requirements so that it can run. In this case, we just need the Flask module to be available:
$ pip install -r requirements.txt
Here, Flask works as the web server; we will understand more about it in detail in the next chapter.
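The hello.py app in the repository is likely a minimal Flask application along the following lines. This is a hedged sketch rather than the repository's exact code, but it produces the same "Hello World!" response shown below:

```python
from flask import Flask

# Create the Flask application object that will serve our routes
app = Flask(__name__)

@app.route('/')
def hello_world():
    # Respond to GET / with a plain-text greeting
    return 'Hello World!'
```

To serve it, the script would also call app.run(host='0.0.0.0', port=5000) under an `if __name__ == '__main__':` guard, which is what produces the `Running on http://0.0.0.0:5000/` banner you will see next.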
Once it is installed successfully, you can run the app using the following command:
$ python hello.py
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
I think we are good to see the output, which is as follows:
$ curl http://0.0.0.0:5000/
Hello World!
If you see this output, then our Python development environment is correctly set up.
Now it's time to write some Python code!
In this chapter, we began by exploring the cloud platform and the cloud computing stack. You learned about the twelve-factor app methodology and how it can help in developing microservices. Lastly, you learned what an ideal setup a developer machine should have to get started with application creation.
In the next chapter, we will start building our microservices by creating backend REST APIs and testing them with API calls, as well as with a Python testing framework.