A Developer's Guide to Cloud Apps Using Microsoft Azure: Migrate and modernize your cloud-native applications with containers on Azure using real-world case studies

By Hamida Rebai Trabelsi
Book Feb 2023 274 pages 1st Edition


An Introduction to the Cloud-Native App Lifecycle

This first chapter introduces the basic concepts of cloud-native development and the lifecycle involved. You will learn the fundamentals of building and deploying applications on any cloud platform, including adopting a microservices architecture, containerization, and orchestration.

We will learn how to use containers and serverless architecture, which enable developers to build applications with more flexibility and portability than applications hosted on traditional servers or virtual machines (VMs).

To accelerate the product development process and improve the quality of our apps, we will follow the Twelve-Factor Application design principles and methodology. As projects grow, code bases also become more complex, and it is strongly recommended that you test new versions of software.

In this chapter, we’re going to cover the following main topics:

  • An introduction to cloud-native applications
  • Application design
  • Application lifecycles
  • The Twelve-Factor Application design methodology
  • Serverless applications
  • The challenges of cloud-native applications

An introduction to cloud-native applications

In the continuously competitive market of digital companies, the main issue is IT agility.

However, with technology evolving every day, companies are struggling to catch up with digital transformation, adopt new trends such as the cloud, artificial intelligence (AI), mobility, or the Internet of Things (IoT), and change their business models in order to be able to adapt to the new reality of the market.

This delay is caused by the interdependencies of services related to the lifecycle of an IT project, where changes to the source code of a classic monolithic client-server application can destabilize systems in the maintenance phase.

For this reason, companies are looking to adopt more optimized, scalable architectures to ensure resiliency and continuous availability, while minimizing resource consumption costs and implementing cloud-native applications.

This change should be made quickly, especially for start-ups that want to build robust infrastructures with minimal costs.

A cloud-native approach includes all the concepts of building and deploying scalable applications in modern, dynamic environments on any cloud platform, be it a public, private, or hybrid cloud. Organizations use cloud-native technologies to build and run scalable applications.

Today, application environments are modern, automated, and dynamic because we can publish any application and store data in a public, private, or hybrid cloud. We can use technologies such as containers, service meshes, and immutable infrastructure, and patterns such as microservices, declarative APIs, and event buses. These techniques help us implement loosely coupled solutions that are easy to maintain, resilient, and observable.

The robust automation of environments, as in the case of infrastructure as code, allows engineers to make major changes frequently without worrying about the impact and with minimal effort.

A cloud-native application has specific features that present the pillars of cloud-native architecture and a design pattern, used to build a new application from scratch that can be deployed in a cloud environment.

Microservices are a part of cloud-native application architecture; each microservice runs independently of the others on a containerized and orchestrated platform, and they connect and communicate through APIs.

Figure 1.1 – The pillars of cloud-native architecture

Applications based on cloud-native architecture, if they are well designed, are reliable and provide the scalability and performance needed to meet the recurring goal of a fast time to market.

Application design

One of the most important steps in the lifecycle of an application is the design and architecture of the app. This is the most critical aspect prior to starting the implementation, deployment, continuous delivery, and maintenance of an application. The evolution in technologies and patterns always influences the design of applications as we seek to ensure performance and security.

What happens if the application stops working, or crashes for no reason or due to a lack of resources? How are you going to debug it if the error isn’t clear, or if the logs aren’t really good enough?

If you need to ask these questions, then you are on the wrong path – you are not working in a cloud-native application context.

The design of cloud-oriented applications has the objective of taking advantage of the benefits of the cloud. Even the software and services that manage these applications will be deployed in the cloud.

Cloud-native applications are typically microservices embedded in containers running on cloud computing infrastructure. Cloud-native applications use a microservice architecture. This architecture ensures the allocation of resources to each service used by the application more efficiently than the old approaches, such as monolithic applications. This makes an application flexible and adaptable to a cloud-oriented architecture.

Monoliths versus microservices

It is really important to understand the difference between the traditional monolithic approach and the microservices approach before defining the concept of microservices.

To scale a monolithic application, we have to clone the entire application on multiple servers or VMs. But for a microservices application, scaling is done by deploying every service in multiple instances across servers or VMs.

In a microservices approach, every application is composed of a collection of services that are related to specific functionalities. Every service can be developed, tested, deployed, versioned, and scaled independently. Monolithic applications are simple to use and easy to develop, deploy, and test, but they have limitations in size and complexity. Despite the simplicity of horizontal scaling, where we run multiple application copies behind a load balancer, it is difficult to do when multiple modules have conflicting resource requirements.

Microservices are very similar to beehives. Within a hive, thousands of bees coexist and help each other for a single common goal – the survival of the colony. The queen, the workers, and the drones – each one has their peculiarities and must, therefore, assume distinct tasks.

Monolithic and microservices architectures

Microservices are regularly discussed now in articles, on blogs, on social media, and even at conference presentations.

How do we use microservices to tackle complexity? In recent years, software architecture has evolved rapidly, from spaghetti-oriented architecture where everything was a big jumble, to lasagna-oriented architecture where we can see the layers of architecture, to ravioli-oriented architecture where we talk about microservices. In this latter architecture, we split the application into sets of services instead of building a monolithic application. Maybe we will see pizza-oriented architecture next, where we use a serverless approach. Let’s now take a look at the layered architecture pattern and compare it to microservices.

The layered architecture pattern is the most common architecture pattern and is otherwise known as the n-tier architecture pattern. For distributed n-tier client/server applications, when taking a monolithic approach, you start with a hexagonal modular architecture, where you separate the domain model and the adapters (the devices used for inputs and outputs).

A monolithic application is composed of several layers, each containing a different type of component.

In this classic example, illustrated in Figure 1.2, we have four layers, from the user interface to the database where we store our data:

  • Presentation layer: This is the user interface layer; it can be a web, mobile, or desktop application.
  • Services layer: This layer exposes the application’s functionality as a set of services, split by functionality and domain. It is responsible for handling HTTP requests and responding with either HTML or JSON/XML, as in the case of API services.
  • Business logic layer: This holds the application’s business logic, the heart of our application. This entails custom business rules, such as operation definitions, constraints, and algorithms, that manage the exchange of information between the database layer and the presentation layer.
  • Database access layer: This is an abstraction of the logical data model. The modification of the logical data model is done in the business layer, but we can perform even more complex data manipulations from multiple sources and send them back to the business layer. This layer will ensure access to the database.
Figure 1.2 – N-tier architecture pattern for monolithic and microservices architectures

The architecture presented here is logically modular, but the application is packaged and deployed in a single package as a monolith.

Let’s explore the different elements of the monolithic approach:

  • Monolithic applications are easy to set up because they involve a single complete package. They include all the components, such as the GUI, business layer, data, and related necessary services encapsulated in a single package.
  • This single package is developed in sequential order. At the start of each project, the designers work on the design of the product as well as the documents necessary to meet the client’s needs. Then, the developers implement code and send it to the quality assurance department to carry out the necessary tests.
  • The team of testers runs different types of tests, including integration tests, interface tests, and even performance tests, to identify errors and evaluate the performance of the application.
  • If they detect errors, code is sent back to the developers so that it can be debugged and corrected.
  • Once the code passes all the tests, it is deployed in a test production environment similar to the final environment and then deployed in a real environment.
  • If you want to modify the code, add a new feature, or even remove an old feature, you have to start the whole process again. If several teams are working on the same project, taking into account the changes in the teams as developers come and go, coordinating code changes is a challenge and will take a lot of time. Moreover, the deployment of a software project always requires a specific infrastructure configuration as well as the implementation of an extended functional test mechanism. Therefore, in a complex project with dynamic requirements, the whole process is inefficient and time-consuming. Microservices architecture can help to solve most of these challenges.

For microservices, the idea is very simple – we divide our application into a set of smaller, interconnected services instead of creating a single monolithic application. This architectural design model is based on the principles of domain-driven design and DevOps, and it applies when an application has several domains that can operate independently.

Each microservice represents a small application with its own hexagonal architecture. This application is composed of business logic and data.

Some microservices may implement a user interface, such as a web page or mobile application, while others may expose or consume a Representational State Transfer (REST) Application Programming Interface (API), or even a Remote Procedure Call (RPC). Services may communicate through a message-based system or simply consume APIs provided by other services.

The services are independent, making it easy for developers to work on them without affecting the entire app. Developers also have the freedom to use different languages in different parts of the code simultaneously and, via a central repository that acts as a version control system, to update specific features without disrupting the software or causing application downtime.

Developers can use a central container orchestrator to improve performance by managing automatic scheduling and the allocation of resources on demand, by using scaling features. Finally, microservices are adopted as enablers of DevOps and Continuous Integration (CI)/Continuous Delivery (CD), allowing them to be updated and deployed faster and without issues.

Microservices are deployed independently, each service having its own database, as shown in the previous diagram.

Microservices allow us to scale and deploy parts of an application independently. They offer great benefits but also the challenges of distributed software; microservices are not a universal solution for every app in the cloud, as they are intended mainly for large, scalable, long-lived distributed applications. Do not underestimate the complexity involved in implementation and testing.

We have discussed the application design and have explored the difference between monolithic and microservices approaches. Let’s now move on to application lifecycles.

Application lifecycles

The term application lifecycle refers to the cyclical software development process, which includes planning and monitoring, development, testing, deployment, operation, monitoring, collaboration, and communication.

Figure 1.3 – Application lifecycle

Application Lifecycle Management (ALM) entails the use of a set of tools, teams, and processes to manage the lifecycle of an application, from requirements management, project management, design and software architecture, development, unit and integration testing, maintenance, update requirement management, CI, delivery integration, deployment, and release management to the end of life.

ALM consists of the following five stages.

Stage 1 – application governance

Application governance is the initial stage of decision-making and includes requirements management. During this stage, the team begins to define the functions and functionalities of the application that are required to achieve the objectives defined by the client. This involves designing the concept of an app based on these user requirements.

Stage 2 – development

This is the most important stage in the application lifecycle because it determines the creation of the application. The developers take the functionalities planned in the previous stage and prepare a development plan to achieve them. In most cases, these functionalities will be broken down into chunks and assigned to the appropriate teams, with a schedule for the release of each phase. The teams then start implementing code and integrating it according to the plan.

Stage 3 – quality assurance – software testing

Once the application has been implemented in line with the requirements, the next stage is the testing phase, which ensures that the application actually meets all the requirements, works without errors, and provides an appropriate user experience. Test scenarios and environments are prepared, and application performance testing is performed. The testers provide feedback at the end and publish reports on the errors encountered, including unconfirmed issues and bugs, and the development team updates the product based on this feedback.

Stage 4 – deployment

This stage begins when the product is ready to be deployed to production for end users. This can be done via several methods, depending on the needs of the customers. A continuous deployment strategy can be put in place to facilitate the automation of this process.

Stage 5 – operations and maintenance

The ALM process does not stop at the point of product deployment to users – it continues with the ongoing operation and maintenance of the product. To confirm that the software is meeting the business objectives, in-use performance monitoring should be put in place to prevent overloads or service downtime issues. This also allows the team to find and resolve any problems encountered, along with providing updates and improvements.

The final phase of this stage involves the withdrawal of the product according to criteria defined in advance by the team, detailing the reason for the decision to retire the software and move to a new version or a new product.

Now that we have discussed the application lifecycle model, let’s move on to the Twelve-Factor Application design methodology.

The Twelve-Factor Application design methodology

Nowadays, software is usually provided as a service, whether in the form of web applications or software as a service (SaaS). The Twelve-Factor App is an influential software application design model for designing a scalable application architecture.

Note

The Twelve-Factor App was published in 2011 by Adam Wiggins and provides a set of principles to follow in order to create code that can be released reliably, updated quickly, and maintained consistently.

The Twelve-Factor App methodology page can be found at https://12factor.net/. The following is a summary of the principles:

  • Code base – “One code base tracked in revision control; many deploys”:

Each application must have its own code base (or repository); multiple applications should not share the same code. A code base is a repository of versioned code. However, we must avoid creating multiple code bases for different versions – version management must be managed by a repository tool such as Git. It is recommended that for all deployment environments, there should be only one repository, not multiple.

  • Dependencies – “Explicitly declare and isolate dependencies”:

Every application has dependencies on other packages. Most applications require external dependencies, and the objective is to deploy the application together with its dependencies, because they form a whole – a kind of bundle.

Therefore, you have to declare the dependencies explicitly and precisely before creation, and then isolate these dependencies at runtime.

This is enabled through tools such as NuGet in .NET or npm for JavaScript. These tools declare dependencies inside manifests, including very specific versions, and then ensure that the correct dependencies are available at runtime.
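As a sketch of such an explicit dependency manifest, a .NET project file pins each NuGet package to an exact version (the package names and versions below are illustrative):

```xml
<ItemGroup>
  <!-- Every dependency is declared explicitly and pinned to a version -->
  <PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
  <PackageReference Include="Serilog" Version="2.12.0" />
</ItemGroup>
```

Restoring the project then installs exactly these versions, so the bundle behaves the same on every machine.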

  • Config – “Store config in the environment”:

The idea is to separate the code from the configuration of the application itself. This configuration can be placed in environment variables, which will be injected at runtime, and the configuration and the code will thus be in separate files. However, sensitive data such as credentials and keys should not appear visibly in the code, nor even in the configuration file, for security reasons. Good examples of external configuration files are the appsettings.json file in .NET projects, a Kubernetes manifest file, and a docker-compose.yml file.
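As a minimal sketch of this factor, a service can read its settings from environment variables at startup; the variable names and the default value here are hypothetical:

```python
import os

# Twelve-factor config: settings come from the environment at runtime,
# so the same build runs unchanged in dev, staging, and production.
# DATABASE_URL and ENABLE_BETA are illustrative variable names.
database_url = os.environ.get("DATABASE_URL", "sqlite:///dev.db")
beta_enabled = os.environ.get("ENABLE_BETA", "false").lower() == "true"

print(database_url, beta_enabled)
```

Because the code never hardcodes these values, switching environments means changing variables, not code.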

  • Backing services – “Treat backing services as attached resources”:

A backing service is any service your application needs for its functionality, such as databases, mail servers, caching systems, and even services providing business functionality or security.

  • Build, release, and run – “Strictly separate build and run stages”:

The build stage assembles everything that is needed for your application. The release stage combines the output of the build stage with the configuration values (both environmental and application-specific). The run stage uses tools such as containers and processes to launch the application in an environment distinct from those used in the build and release stages.
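A multi-stage Dockerfile is one common way to keep the build and run stages strictly separate; the sketch below assumes a hypothetical .NET application called MyApp:

```dockerfile
# Build stage: the full SDK image compiles and publishes the app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Run stage: a minimal runtime image containing only the build output
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyApp.dll"]
```

The resulting image is the release artifact; configuration is injected when the container runs, never baked into the build.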

  • Processes – “Execute the app as one (or more) stateless process(es)”:

We are talking here about the process state. The application can work as a collection of stateless processes, implying that no trace of the state of another process (such as session state) will be saved. Equally, worker processes and instances can be added and removed to handle a particular workload at a given time. A stateless process makes scaling easier. In conclusion, each process is independent of the others, which prevents surprises.

  • Port binding – “Export services through port binding”:

An application is identified in the network by a port number or a domain name known to the Domain Name System (DNS). The idea behind the principle of port binding is that the use of ports in the network is very efficient – for example, port 80 is used for web servers running under HTTP, port 443 is the default port number for HTTPS, port 22 is for SSH, port 3306 is the default port for MySQL, and port 27017 is the default port for MongoDB.
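To illustrate port binding, the sketch below exports a self-contained HTTP service on a port taken from the environment, using only the Python standard library (the default of 0 simply asks the OS for a free port; in production the platform would inject a real PORT value):

```python
import os
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The app speaks HTTP itself rather than relying on a web
        # server injected into the execution environment.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, fmt, *args):
        pass  # keep the example quiet

# The port comes from the environment; here 0 means "pick any free port".
port = int(os.environ.get("PORT", "0"))
server = HTTPServer(("127.0.0.1", port), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

bound_port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{bound_port}/").read()
server.shutdown()
```

The routing layer only needs to know the bound port to direct traffic to this service.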

  • Concurrency – “Scale-out through the process model”:

The concurrency factor states that applications should be able to scale up or down elastically, depending on their workload.

  • Disposability – “Maximize robustness with fast startup and graceful shutdown”:

The disposability principle states that applications should start quickly and stop gracefully, without slowness or errors. This means that users can access the application without experiencing service downtime issues.

In the event of a shutdown, it is recommended to ensure that all database connections and other network resources are properly terminated and that all shutdown activity is logged.

  • Dev/prod parity – “Keep development, staging, and production as similar as possible”:

The dev/prod parity factor focuses on the importance of keeping the development, staging (simulation and acceptance), and production environments as similar as possible. Why? Because identifying potential bugs and errors during development and testing, before an application is released to production, requires all deployment environments to be similar yet independent.

  • Logs – “Treat logs as event streams”:

The logs factor highlights the importance of ensuring that your application doesn’t itself manage the routing, storage, or analysis of its output stream (i.e., logs).

For example, one consumer might be interested in error data, another in request/response data, and yet another in storing all log data for event archiving. This means that logs should be treated as a stream of log events. Even if an application is removed, its log data persists long after.

  • Admin processes – “Run admin/management tasks as one-off processes”:

This factor recommends running one-off administration or management tasks as short-lived processes, rather than building them into the application itself.

The examples given on https://12factor.net/ are for migrating databases and running one-time scripts to perform cleanup.

Now that we have discussed the Twelve-Factor App design methodology, let’s move on to serverless applications.

Serverless applications

In the world of cloud computing, in order to better understand public cloud services such as Microsoft Azure, it is necessary to understand the shared responsibility model and distinguish between what will be managed by the cloud provider and the tasks that are your responsibility to manage.

Workload responsibilities vary depending on the workload, which can be hosted as Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), or in an on-premises data center. What is interesting about cloud providers is that they provide the infrastructure required to run applications for their users, handling everything from the execution of the deployed servers to the dynamic management of machine resources. These machines can be scaled according to the runtime load.

The user focuses only on the development and deployment of applications.

Serverless computing, also known as Function as a Service (FaaS), is a cloud-native development model that allows developers to build and run applications without having to manage servers. Serverless computing allows developers to only write the code, while the backend infrastructure is managed by the cloud provider. Developers can write multiple functions in order to implement business logic in an application, and then all these functions can be easily integrated to communicate with each other. Applications using this pattern are said to be using serverless architecture.

Microservices and serverless are two major concepts in cloud computing today.

Serverless architecture is a very common way to implement microservices architecture. In microservices architecture, the application is broken down into small independent pieces, each with its own task to fulfill, and microservices are widely deployed and managed using a serverless model.

In a serverless model, application code is executed on demand in response to triggers pre-configured by the application developer. The benefits of building applications from microservices are perhaps most apparent when the application is hosted in the cloud using serverless architecture.

For most use cases, code is executed in stateless containers. Code execution can be triggered by different events, such as HTTP requests, database events, and queuing services. Other triggers include monitoring alerts, file downloads, and scheduled events (as in the case of cron jobs).

FaaS is a subset of serverless computing, focusing on event-driven triggers where code is executed in response to events or requests.
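The trigger model can be sketched in a few lines: functions are registered against event types and run only when a matching event arrives. The registry, event names, and handlers below are illustrative, not any cloud provider’s real API:

```python
# Hypothetical FaaS-style dispatch: the platform maps event types to
# functions and invokes them on demand, scaling to zero in between.
handlers = {}

def on(event_type):
    """Register a function as the handler for one event type."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("http_request")
def greet(event):
    return {"status": 200, "body": f"Hello, {event['name']}!"}

@on("queue_message")
def process(event):
    return {"status": 200, "body": f"processed {event['id']}"}

def dispatch(event_type, event):
    # In a real platform this call happens inside a managed, stateless
    # container in response to the configured trigger.
    return handlers[event_type](event)

response = dispatch("http_request", {"name": "Azure"})
print(response["body"])
```

The developer writes only the handler bodies; everything around `dispatch` is the provider’s responsibility.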

The use of serverless computing improves developer productivity and reduces the time required to release and deliver new applications to the market. However, along with the benefits of serverless computing, challenges also present themselves in the difficulty of monitoring and maintaining serverless applications. Consideration should also be given to the security issues around serverless architecture, such as the need to analyze short-lived functions in order to scan for vulnerabilities and prevent code injection.

The challenges of cloud-native applications

Cloud-native applications take advantage of the cloud operating model, the benefits of which we discussed previously. However, as well as benefits, there are also challenges with cloud-native development that every organization should consider before beginning their move to it.

Although the theory behind the development of cloud-oriented applications seems clear and simple enough, problems remain at the level of implementation, especially if an enterprise has longstanding legacy applications.

Let’s take a look at some of the most common challenges faced by enterprises in their cloud-native journeys.

The challenges of service discovery and CI/CD pipelines for microservices applications

If we have several microservices that communicate with each other, these microservices run in different instances, and the number of service instances and their locations change dynamically. The service discovery mechanism helps us to locate each instance.
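As a toy sketch of client-side service discovery (all service names and addresses are illustrative): instances register themselves, and a caller resolves a live instance at request time. Real systems delegate this to a dedicated registry or the orchestrator’s built-in discovery.

```python
import random

# A toy in-memory service registry keyed by service name.
registry = {}

def register(service, address):
    registry.setdefault(service, set()).add(address)

def deregister(service, address):
    registry.get(service, set()).discard(address)

def resolve(service):
    """Return the address of one live instance, chosen at random."""
    instances = registry.get(service)
    if not instances:
        raise LookupError(f"no live instance of {service!r}")
    return random.choice(sorted(instances))

# Two instances of a hypothetical 'orders' service come online...
register("orders", "10.0.0.5:8080")
register("orders", "10.0.0.6:8080")

# ...and a caller picks one at request time, tolerating churn.
address = resolve("orders")
```

Because instances register and deregister dynamically, callers never hardcode locations and keep working as instances come and go.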

CI encourages continuous code merging and testing, leading to the early detection of bugs. Other benefits include less time wasted dealing with merge issues and faster feedback to the development team.

CD is an extension of CI. It is a semi-manual process that allows developers to deploy all changes to their customers with a simple click of a button. It also allows you to auto-deploy code changes to diverse environments (development, staging, testing, QA, production, and so on) so that companies can quickly troubleshoot, fix bugs, and respond to changing business needs.

This challenge of service discovery and CI/CD for a microservices application involves being able to identify where dynamically deployed microservices are running, especially when additional instances are created.

Microservices are composed of a set of separate components and services, each managed by a separate team with an independent lifecycle and an independent CI/CD pipeline.

There are many challenges in the implementation of microservices:

  • Low visibility into the quality of changes introduced in each service’s pipeline in the application
  • Uncertainty about whether each launched pipeline meets security and compliance requirements
  • The independence of each pipeline can pose a pipeline control problem – for example, security vulnerabilities, performance issues, a flawed automated testing system, version control, and technological limitations
  • Infrastructure duplication caused by multiple services and pipelines

Security and observability challenges

Cloud-native applications present additional challenges for security and risk management because they are inherently complex.

Several independent services to secure

Especially if we’re using a combination of containers, Kubernetes, and serverless functions to take advantage of microservices, we’ll have multiple services to protect in multiple environments throughout the application lifecycle.

Regular changes in environments

In the agile methodology, teams release a new version every week (or even daily, to correct a bug, for example). This presents a challenge for securing what is deployed: security personnel must keep control of these deployments without slowing down the release cadence each time.

Zero trust and service identity

Unlike monolithic applications that use a physical machine or a virtual machine as a reference point or the stable node of a network, cloud-native applications and, especially, services are deployed in different places. They can even be replicated in several places, providing us with the ability to stop and then restart them at any time. The security of these services requires a network security model that takes into consideration the context of the application, the identity of the microservices, and their networking requirements. This leads us to build a model of zero trust around these requirements.

Zero trust is a strategic approach that consists of protecting organizations by eliminating implicit trust and continuously validating all phases of digital interactions. Zero-trust security is an IT security model that requires strict identity verification for all persons and devices attempting to access resources on a private network, whether inside or outside the network perimeter.

Summary

In this chapter, we learned about application design and lifecycle management, from planning to maintenance. We covered the Twelve-Factor App methodology, an influential design model for building scalable application architectures. Next, we examined serverless applications, also known as Function as a Service (FaaS), and finally, we looked at the challenges faced when implementing a cloud-native application.

In the next chapter, we will learn about the cloud computing journey, cloud adoption methodologies, best practices, and the different tools and resources that can simplify and accelerate your migration to the cloud.


Key benefits

  • Learn various methods to migrate legacy applications to the cloud using different Azure services
  • Implement continuous integration and deployment as a best practice for DevOps and agile development
  • Get started with building cloud-based applications using containers and orchestrators in different scenarios

Description

Companies face several challenges during cloud adoption: developers and architects need to migrate legacy applications and build cloud-oriented applications using Azure-based technologies in different environments. A Developer’s Guide to Cloud Apps Using Microsoft Azure shows you how to migrate old apps to Azure using the Cloud Adoption Framework, presents real-world use cases, and helps you build market-ready, secure, and reliable applications.

The book begins by introducing you to the benefits of moving legacy apps to the cloud and modernizing existing ones using a set of new technologies and approaches. You’ll then learn how to use these technologies and patterns to build cloud-oriented applications. This app development book takes you on a journey through three major services in Azure – Azure Container Registry, Azure Container Instances, and Azure Kubernetes Service – which will help you build and deploy an application based on microservices. Finally, you’ll implement continuous integration and deployment in Azure to fully automate the software delivery process, including the build and release processes.

By the end of this book, you’ll be able to perform application migration assessment and planning, select the right Azure services, and create and implement a new cloud-oriented application using Azure containers and orchestrators.

What you will learn

  • Get to grips with new patterns and technologies used for cloud-native applications
  • Migrate old applications and databases to Azure with ease
  • Work with containers and orchestrators to automate app deployment
  • Select the right Azure service for deployment as per the use cases
  • Set up CI/CD pipelines to deploy apps and services on Azure DevOps
  • Leverage Azure App Service to deploy your first application
  • Build a containerized app using Docker and Azure Container Registry

Product Details


Publication date : Feb 17, 2023
Length : 274 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781804614303


Table of Contents

20 Chapters
Preface
1. Part 1 – Migrating Applications to Azure
2. Chapter 1: An Introduction to the Cloud-Native App Lifecycle
3. Chapter 2: Beginning Your Application Migration
4. Chapter 3: Migrating Your Existing Applications to a Modern Environment
5. Chapter 4: Exploring the Use Cases and Application Architecture
6. Part 2 – Building Cloud-Oriented Applications Using Patterns and Technologies in Azure
7. Chapter 5: Learning Cloud Patterns and Technologies
8. Chapter 6: Setting Up an Environment to Build and Deploy Cloud-Based Applications
9. Chapter 7: Using Azure App Service to Deploy Your First Application
10. Part 3 – PaaS versus CaaS to Deploy Containers in Azure
11. Chapter 8: Building a Containerized App Using Docker and Azure Container Registry
12. Chapter 9: Understanding Container Orchestration
13. Chapter 10: Setting Up a Kubernetes Cluster on AKS
14. Part 4 – Ensuring Continuous Integration and Continuous Deployment on Azure
15. Chapter 11: Introduction to Azure DevOps and GitHub
16. Chapter 12: Creating a Development Pipeline in Azure DevOps
17. Assessments
18. Index
19. Other Books You May Enjoy


FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe reader installed, then clicking on the link will download and open the PDF file directly. If you don't, then save the PDF file on your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing

When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it, we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as publishers and those of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or Bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the Print Book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment can be made using a credit card, debit card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem with using or installing Adobe Reader, then contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book, go to www.packtpub.com/account.
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us.
What eBook formats do Packt support?

Our eBooks are currently available in a variety of formats, such as PDF and ePub. In the future, this may well change with trends and developments in technology, but please note that our PDFs are not in Adobe eBook Reader format, which has greater restrictions on security.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower priced than print editions
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend you download and install the free Adobe Reader version 9.