
You're reading from Practical Cloud-Native Java Development with MicroProfile

Product type: Book
Published in: Sep 2021
Publisher: Packt
ISBN-13: 9781801078801
Edition: 1st
Authors (5):
Emily Jiang

Emily Jiang is a Java Champion and a cloud-native architect with practical experience building cloud-native applications. She is a MicroProfile guru, leading a number of MicroProfile specifications as well as their implementations in Open Liberty. She is a well-known international conference speaker.

Andrew McCright

Andy McCright is IBM's Web Services Architect with 20 years of experience building Enterprise Java runtimes. He leads the MicroProfile Rest Client & GraphQL projects and contributes to Open Liberty, Jakarta REST, CXF, RESTEasy, and more. He is also a blogger.

John Alcorn

John Alcorn is an application modernization architect in the Cloud Engagement Hub, specializing in helping customers modernize their traditional Java EE applications to the cloud. He developed and maintains the Stock Trader application that shows how to build a composite application out of MicroProfile-based microservices in Java. You can connect with John via Twitter.

David Chan

David Chan is a software developer at IBM who works on the observability and serviceability components of the Open Liberty project. He is involved with the MicroProfile project with a specialization in the MicroProfile Metrics component.

Alasdair Nottingham

Alasdair Nottingham is a software developer and lead architect for Open Liberty and WebSphere. He has been involved with the MicroProfile and Jakarta EE projects to varying extents since their inception.

Chapter 1: Cloud-Native Applications

When talking about cloud-native applications, it is important to have a shared understanding of what cloud-native means. There is often an assumption that cloud-native and microservices are the same thing, but actually, microservices are just one architectural pattern that can be used when building cloud-native applications. That leads us to the questions: what is a cloud-native application, and what are the best practices for building one? These questions will be the focus of this chapter.

In particular, we will cover these main topics:

  • What is a cloud-native application?
  • Introducing distributed computing
  • Exploring cloud-native application architectures
  • Cloud-native development best practices

This chapter will provide some grounding for understanding the rest of the book as well as helping you to be successful when building cloud-native applications.

What is a cloud-native application?

Back in 2010, Paul Fremantle wrote an early blog post about cloud-native (http://pzf.fremantle.org/2010/05/cloud-native.html) and used the analogy of trying to drive a horse-drawn cart on a 6-lane highway. No matter how good the highway is as a road, there is a limit to how much a cart can transport and how quickly. You need vehicles that are designed for driving on a highway. The same is true of applications.

An application designed to run in a traditional data center is not going to run well on the cloud compared to one that was designed specifically to take advantage of the cloud. In other words, a cloud-native application is one that has been specifically designed to take advantage of the capabilities provided by the cloud. The Stock Trader application from Chapter 8, Building and Testing Cloud-Native Applications, is an example of such an application; Netflix is a well-known real-world example of a business built on cloud-native microservices.

Perhaps at its core, the promise of the cloud is being able to get compute resources on-demand, in minutes or seconds rather than days or weeks, and being charged based on incremental usage rather than upfront for potential usage – although, for many, the attraction is just no longer having to manage and maintain multiple data centers. The commoditization of compute resources that the cloud provides leads to a very different way of thinking about, planning for, and designing applications, and these differences significantly affect the application. One of the key changes in application design is the degree to which applications are distributed.

Introducing distributed computing

Most cloud-native architectures involve splitting an application into several discrete services that communicate over a network link rather than an in-process method invocation. This makes cloud-native applications implicitly distributed applications, and while distributed computing is nothing new, it does increase the need to understand its benefits and pitfalls. When building distributed applications, it is important to consider and understand the eight fallacies of distributed computing. These are as follows:

  • The network is reliable.
  • Latency is zero.
  • Bandwidth is infinite.
  • The network is secure.
  • Topology doesn't change.
  • There is one administrator.
  • Transport cost is zero.
  • The network is homogeneous.

In essence, what these fallacies mean is that a network call is slower, less secure, less reliable, and harder to fix than invoking a Java method call or a C procedure. When creating cloud-native applications, care needs to be taken to ensure these fallacies are correctly accounted for; otherwise, the application will be slow, unreliable, insecure, and impossible to debug.
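MicroProfile Fault Tolerance, for example, lets an application account for these fallacies declaratively. Here is a minimal sketch, assuming a hypothetical QuoteClient whose remote call can fail or hang:

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class QuoteClient {

    // The network is not reliable: retry transient failures.
    @Retry(maxRetries = 3)
    // Latency is not zero: give up rather than hang the caller.
    @Timeout(500)
    // Failures still happen: degrade gracefully instead of propagating.
    @Fallback(fallbackMethod = "cachedQuote")
    public String latestQuote(String symbol) {
        // A real implementation would call the remote pricing service here.
        throw new RuntimeException("remote service unavailable");
    }

    // Must match the signature of the guarded method.
    String cachedQuote(String symbol) {
        return symbol + ": last known price";
    }
}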

An application consisting of multiple services interacting across the network can produce many benefits, such as the ability to individually scale and update services, but care must be taken to design services to minimize the number of network interactions required to deliver the ultimate business solution.

As a result, several cloud-native architectures can be used to build cloud-native applications that present different tradeoffs between the benefits and challenges of distributed computing.

Exploring cloud-native application architectures

Since 2019, there has been increasing discussion in the industry about the pros and cons of microservices as a cloud-native application architecture. This has been driven by many microservice-related failures, and as a result, people are now discussing whether some applications would be better off using different architectures. There has even been the start of a renaissance around the idea of building monoliths, after several years of those kinds of applications being seen as an anti-pattern.

While it is attractive to think of cloud-native as just being a technology choice, it is important to understand how the development processes, organization structure, and culture affect the evolution of cloud-native applications, the system architecture, and any ultimate success. Conway's Law states the following:

Any organization that designs a system will produce a design whose structure is a copy of the organization's communication structure.

A simple way of thinking of this is if your development organization is successful at building monoliths, it is unlikely to be successful at building microservices without some kind of reorganization. That doesn't mean every team wanting to do cloud-native should go out and reorganize; it means that you should understand your strengths and weaknesses when deciding what architecture to adopt. You should also be open to reorganizing if necessary.

This section discusses a number of the more popular cloud-native application architectures out there and the pros and cons of using them. Let's start with microservices.

Microservices

Although Netflix didn't invent the idea of microservices, their use of the architecture did popularize it. A single microservice is designed to do one thing. That doesn't, despite the name, mean the service is small or lightweight – a single microservice could be millions of lines of code, but the code in the microservice has a high level of cohesion. A microservice would never handle ATM withdrawals and also sell movie tickets. Decomposing a cloud-native application into a series of well-designed microservices is not a simple task; different people might take different views on whether a deposit into and a withdrawal from a bank account warrant a single microservice or two.

Microservices usually integrate with each other via REST interfaces or messaging systems, although gRPC and GraphQL are growing in popularity. A web-facing microservice is likely to use a REST or GraphQL interface, but an internal one is more likely to use a messaging system such as Apache Kafka. Messaging systems are generally very resilient to network issues, since once the messaging system has accepted the message, it will store the message until it can be successfully processed.
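For illustration, a hypothetical Jakarta REST (JAX-RS) resource like the following is all it takes for a microservice to expose a REST endpoint:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("accounts")
public class AccountResource {

    @GET
    @Path("{id}/balance")
    @Produces(MediaType.APPLICATION_JSON)
    public String balance(@PathParam("id") String id) {
        // A real service would consult a data store; the response is
        // hardcoded here to keep the sketch self-contained.
        return "{\"account\":\"" + id + "\",\"balance\":100.00}";
    }
}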

The key promise of the microservice-based architecture is that each microservice can be independently deployed, updated, and scaled, allowing teams that own disparate microservices to work in parallel, making updates without the need to coordinate. Achieving this independence is perhaps the biggest challenge with microservice architectures. It is relatively common for well-meaning developers who set out to build microservices to end up building a distributed monolith instead. This often occurs because of poorly defined and poorly documented APIs between services and insufficient acceptance testing, resulting in a lack of trust in updating a single microservice without impacting the others. This is called a distributed monolith because you end up with all the disadvantages of both a monolith and microservices while missing out on the benefits.

In an ideal world, a development organization building microservices will align the microservices with an individual development team. This may be difficult if there are more microservices than development teams. As the number of microservices a team manages increases, more time will be spent managing the services rather than evolving them.

Monoliths

Monoliths are strongly associated with pre-cloud application architectures and are considered an anti-pattern for cloud-native applications. For that reason, it might seem strange that this appears in a discussion of cloud-native architecture. However, there are some reasons for including them.

The first is really just the reality that monoliths are the simplest kind of application to build. While the individual services cannot be independently scaled, as long as the monolith has been designed to scale, this may not be an issue.

The second is that there are a lot of monoliths out there and many enterprises are moving them to the cloud. MicroProfile provides additional APIs to retrofit many cloud-native behaviors into an existing app.

The trick with a monolith is ensuring that despite the colocation of services in a single deployment artifact, the monolith can start quickly enough to enable dynamic scaling and restart if there is an application failure.

Typically, a small development organization will benefit from monoliths since there is only a single application to build, deploy, and manage.

Macroservices

Macroservices sit somewhere between a monolith and a microservice architecture and are also referred to as modular monoliths. With macroservices, the services are combined into a small number of monoliths that interoperate in the same way that a series of microservices would.

This provides many of the benefits of microservices but significantly simplifies the operations environment since there are fewer things to manage. If a macroservice has been written well, then individual services in that macroservice can be broken out if they would benefit from an independent life cycle. A well-known example of a macroservice is Stack Overflow (https://www.infoq.com/news/2015/06/scaling-stack-overflow/), which is famously a monolith except for the tagging capability, which is handled in a separate application due to its different performance needs. This split moves it from being a pure monolith into the realm of macroservices (although Stack Overflow uses the term monolith-plus).

This architecture can work especially well when a development organization is organized into a smaller number of teams than the number of services.

Function as a Service

Function as a Service (FaaS), often referred to as serverless, is an architecture where a service is created as a function that is run when an event occurs. The function is intended to be fast starting and fast executing and can be triggered by things such as HTTP requests or messages being received. FaaS promises that you can deploy the function to a cloud, and it is started and executed by the event trigger, rather than having to have the function running just in case. Typically, public cloud providers that support FaaS only charge for the time the function is running. This is very attractive if the event is relatively uncommon since there is no financial cost in having a system running for when an uncommon event occurs.

The challenge with this architecture is that your function needs to be able to start quickly and usually has to finish executing quickly too; as a result, it isn't suitable for long-running processes. It also doesn't remove the server; the server is still there. Instead, it just shifts the cost from the developer to the cloud provider. If the cloud provider is a public cloud, then that is their problem, since they are charging for the function runtime, but if you are deploying to a private cloud, this becomes your problem, thereby removing some of the benefits.

Event sourcing

Often, we think of services as providing a REST endpoint that clients make calls to. In fact, factor VII of the Twelve-Factor App (discussed in the next section) explicitly states this. The problem with this approach is that a REST call is implicitly synchronous and prone to issues if the service provider is running slow or failing.

When providing an external API to a mobile app or a web browser, a REST API is often the best option. However, for services within an enterprise, there are many benefits to using a messaging system such as Kafka and using asynchronous events instead. A messaging system that can guarantee that the message will be delivered allows the client and service to be decoupled such that an issue with the service provider doesn't prevent the request from occurring; it just means it'll be processed later. A one-to-many event system makes it easy for a single service to trigger multiple different actions with just a simple message send. Different actions can be taken by different services receiving a copy of the message and if new behavior is required, an additional service can receive the same message without having to change the sending service. A simple illustration of this might be that an event that orders an item can be processed by the payment service, the dispatch service, a reorder service, and a recommendation service that provides recommendations based on past purchases.
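MicroProfile Reactive Messaging expresses this pattern directly. A minimal sketch, assuming an orders channel that deployment configuration maps to a Kafka topic:

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class PaymentService {

    // Each service subscribed to the "orders" channel receives its own
    // copy of the event; adding a new consumer (say, a recommendation
    // service) requires no change to the sender.
    @Incoming("orders")
    public void process(String orderEvent) {
        System.out.println("Taking payment for " + orderEvent);
    }
}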

One of the trends with cloud-native applications is that data is moved from a centralized data store closer to the individual services. Each service operates on data it holds, so if something happens to slow down the data store for one service, it doesn't have a knock-on effect on others. This means that new mechanisms are required to ensure data consistency. Using events to handle data updates helps with this, since a single event can be distributed to every service that needs to process the update independently. The updates can take effect even if the service is down when the update is triggered. Another advantage of this approach is that if the data store fails, it can be reconstructed by replaying all the events.

Having chosen the architecture (or architectures) for building your cloud-native application, the next step is to start building it, and to do that, it is a good idea to understand some of the industry best practices around cloud-native application development.

Cloud-native development best practices

There are many best practices that, if followed, will improve the chances that your cloud-native application will be a success. Following these best practices doesn't guarantee success, just as ignoring them doesn't guarantee failure, but they do encode key practices that have been shown to enhance the chances of success. The most famous set of best practices is the Twelve-Factor App.

Twelve-Factor App

The Twelve-Factor App (https://12factor.net) is a set of 12 best practices that, if followed, can significantly improve the chance of success when building cloud-native applications. Some of the factors would be considered obvious by many software developers even outside of cloud-native, but taken together, they form a popular methodology for building cloud-native applications. The 12 factors are as follows:

  • Code base
  • Dependencies
  • Config
  • Backing services
  • Build, release, run
  • Processes
  • Port binding
  • Concurrency
  • Disposability
  • Dev/prod parity
  • Logs
  • Admin processes

I – Code base

The first factor states that a cloud-native application consists of a single code base that is tracked in a version control system, such as Git, and that code base will be deployed multiple times. A deployment might be to a test, staging, or production environment. That doesn't mean that the code in the environments will be identical; a test environment will obviously contain code changes that are proposed but haven't been proven as safe for production, but that is still one code base.

II – Dependencies

For some time, it has been common development practice for Java applications to consume dependencies stored in Maven repositories such as Maven Central. Tools such as Maven and Gradle require you to express your dependencies in order to build against them. This factor absolutely requires that, but it goes beyond just build-time dependencies to runtime ones as well. A 12-factor application packages its dependencies into the application to ensure that a single deployment artifact can be reliably deployed in any suitable environment. This means that having an administrator provide the libraries in a well-known place on the filesystem is not acceptable since there is always a chance the administrator-deployed library and the application-required one are not compatible.

When considering this practice, it is important to make a clear decision about what the cloud-native application is, since at some point there will be a split between what the application provides and what the deployment environment provides. This factor triggered a trend in enterprise Java away from WAR files to executable JAR files, since many viewed the application server as an implicit dependency. However, that just shifted the implicit dependency down a level; it didn't remove it. Now the implicit dependency is Java. To a certain extent, containerization addresses this issue and at the same time, it removes the need to rearchitect around an executable JAR file.

III – Config

Since a 12-factor application may have many deployments and each deployment may connect to different systems with different credentials, it is critical that configuration be externalized into the environment. It is also common to read in the media about security issues caused by a developer accidentally checking credentials into a version control system, which would not happen if the configuration were stored externally to the code base.

Although this factor states that configuration is stored in environment variables, there are many who are uneasy about the idea of storing security-sensitive configuration in environment variables. The key thing here is to externalize configuration in a way that can be simply provided in production.
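MicroProfile Config implements exactly this kind of externalization. A minimal sketch; the property names here are illustrative assumptions:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class OrderRepository {

    // Resolved at runtime from, among other sources, the DATABASE_URL
    // environment variable or a system property, so credentials and
    // endpoints never live in the code base.
    @Inject
    @ConfigProperty(name = "database.url")
    String databaseUrl;

    @Inject
    @ConfigProperty(name = "database.user", defaultValue = "app")
    String databaseUser;
}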

IV – Backing services

Backing services are treated as attached resources. It should be possible to change from one database to another with a simple change in configuration.

V – Build, release, run

All applications go through some kind of build, release, run process, but a 12-factor application has a strict separation between those phases. The build phase turns the application source into the application artifact. The release phase combines the application artifact with the configuration so it can be deployed. The run phase is when the release is actually executing. This strict separation means that a configuration change is never made in the run phase, since there would be no way to reflect it back into the release. Instead, if a configuration change is required, a new release is made and run. The same is true if a code change is required: there is no changing running code without going through a new build and release. This makes sure that you always know what is running and can easily reproduce issues or roll back to a prior version.

VI – Processes

A 12-factor application consists of one or more stateless processes. This does not mean that each request is mapped to a single process; it is perfectly reasonable in Java to have a single JVM processing multiple requests at the same time. It means that the application should not rely on any one process being available from one request to another. If a single client makes 20 requests, the assumption must be that each request is handled by a separate process with no state being retained between processes. It is a common pattern to hold server-side state associated with a user; this state should always be persisted to an external datastore so that if a follow-on request is sent to a different process, there is no impact on the client.
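To make this concrete, here is a minimal sketch in which no cart state survives in the process; the CartStore abstraction is hypothetical and stands in for an external datastore such as a database or Redis:

import java.util.List;

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

// Hypothetical abstraction over an external datastore; some CDI
// producer would supply the actual implementation.
interface CartStore {
    List<String> load(String userId);
    void save(String userId, List<String> items);
}

@ApplicationScoped
public class CartService {

    @Inject
    CartStore store;

    // No cart is held in instance fields, so any process can serve the
    // user's next request.
    public void addItem(String userId, String item) {
        List<String> cart = store.load(userId); // state fetched per request...
        cart.add(item);
        store.save(userId, cart);               // ...and persisted externally
    }
}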

VII – Port binding

Applications export services via port binding. What this means is that an HTTP application should not rely on being installed into a web container, but instead it should declare a dependency on the HTTP server and cause it to open a port during startup. This has led many to take the view that a 12-factor Java application must be built as an uber-jar, but this is just one realization of the idea of building a single deployment artifact that binds to ports. An alternative and significantly more useful interpretation is to use containers; containers are very much built around the idea of port binding. It should be noted that this practice does not always apply; for example, a microservice driven by a Kafka message would not bind to a port. Also, many FaaS platforms do not provide an API for port binding.
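The JDK alone can illustrate the principle: the application below brings its own HTTP server and binds its port at startup rather than being installed into a web container (reading the port from a PORT environment variable is an assumption, not a standard):

import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

import com.sun.net.httpserver.HttpServer;

public class SelfContainedService {
    public static void main(String[] args) throws IOException {
        // The application itself binds the port and serves HTTP.
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "OK".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
    }
}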

VIII – Concurrency

Concurrency in Java is typically achieved by increasing the resources allocated to a process so more threads can be created. With 12-factor, you increase the number of instances rather than the compute capacity. There is a limit to how easy it is to add compute capacity to a single machine, but adding a new virtual machine of equivalent size is relatively easy. This practice is related to factor VI, so they complement and reinforce each other. Although this could be read to suggest a single process per request model, a Java-based application is more than capable of running multiple threads more efficiently than having a 1:1 ratio between process and request.

IX – Disposability

Every application should be treated as disposable. This means making sure the process starts quickly, shuts down promptly, and copes with termination. Taking this approach makes the application scale out well and quickly, as well as being resilient to unexpected failure, since a process can be quickly and easily restarted from the last release.
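At its simplest, honoring disposability in Java means reacting promptly to SIGTERM. A minimal sketch using a JVM shutdown hook:

public class DisposableWorker {

    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        // Runs when the platform sends SIGTERM, for example while
        // scaling down or rescheduling the instance.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            running = false;
            System.out.println("Finishing in-flight work, then exiting");
        }));

        while (running) {
            Thread.sleep(100); // poll for work; keep units of work small
        }
    }
}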

X – Dev/prod parity

Lots of application problems manifest themselves because of differences between development and production environments. In the past, this happened because installing and starting all the downstream software was difficult, but the advent of containers has significantly simplified this, making it possible to run many of these systems in earlier environments. The advantage of this is that you no longer experience problems because your production database interprets SQL differently from the database used in development.

XI – Logs

Applications should write logs, and these should be written to the process output as opposed to being written to the filesystem. When deployed, the execution environment will take the process output and forward it to a final destination for viewing and long-term storage. This is very useful in Kubernetes, where logs stored inside the container do not persist if the container is destroyed, and they are easier to obtain using the Kubernetes log function, which follows the process output and not the log files.
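In Java, this can be as simple as logging through java.util.logging, whose default ConsoleHandler writes to the process's error stream rather than to a file:

import java.util.logging.Logger;

public class CheckoutService {

    private static final Logger LOG =
        Logger.getLogger(CheckoutService.class.getName());

    public void checkout(String orderId) {
        // Goes to the process output, which Kubernetes captures and
        // exposes via `kubectl logs`; no files land on the container
        // filesystem.
        LOG.info(() -> "Checkout started for order " + orderId);
    }
}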

XII – Admin processes

Admin processes should be run as one-off processes separate from the application, and they should not run in line with application startup. The code for these admin processes should be managed with the main application such that the release used for the normal flow can also be used to execute the admin task. This makes sure the application and the admin code do not diverge.

Other best practices

The concept of the 12-factor application has been around for a while; it is important to remember with any methodology that what works for some people may not work for others, and sometimes the methodology needs to evolve as our understanding of how to be successful does. As a result, several other best practices are often added to the 12 factors discussed previously. The most common relates to the importance of describing the service API and how to test it to ensure that changes to one service do not require the coordinated deployment of client services.

APIs and contract testing

While the 12-factor methodology details a lot of useful practices for the creation and execution of cloud-native applications, it does little to talk about how application services interact and how to ensure that changing one doesn't cause another to need to change. Well-designed and clearly documented APIs are critical to ensuring that changes to a service do not affect the clients.

It isn't enough to just have documentation for the API; it is also important to ensure that changes to the service provider do not negatively affect the client. Since any bug fix results in some change in behavior, it is easy for a provider to believe a change is safe and accidentally break a client. This is where contract testing comes in. The advantage of contract testing is that each system (the client and the server) can be tested to ensure that changes to either do not violate the contract.
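Frameworks such as Pact automate contract testing, but even a hand-rolled consumer-side check captures the idea. A sketch using JUnit 5 and the JAX-RS client, assuming the hypothetical balance endpoint sketched earlier and a JSON-P provider on the test classpath:

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import javax.json.JsonObject;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

import org.junit.jupiter.api.Test;

class AccountContractTest {

    @Test
    void balanceResponseHonoursTheContract() {
        Client client = ClientBuilder.newClient();
        try {
            Response response = client
                .target("http://localhost:9080/accounts/42/balance")
                .request("application/json")
                .get();

            // The contract the client relies on: status code and fields.
            assertEquals(200, response.getStatus());
            JsonObject body = response.readEntity(JsonObject.class);
            assertTrue(body.containsKey("account"));
            assertTrue(body.containsKey("balance"));
        } finally {
            client.close();
        }
    }
}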

Security

One of the most noticeable gaps in the 12-factor methodology is the lack of best practices around security. From a certain perspective, this is because there is an existing set of best practices for securing applications, and these apply as much to cloud-native applications as they do to traditional ones. For example, the third factor, on config, addresses, at least partly, how to protect credentials (or other secrets) by externalizing them outside of the application. However, it doesn't talk about how to securely inject secrets into the environment or how they are stored and secured, something that depends on the deployment environment. This is discussed in more detail in Chapter 7, MicroProfile Ecosystem with Open Liberty, Docker, and Kubernetes.

Breaking things down into microservices adds complexity that doesn't apply in a monolith. With a monolith, you can trust the various components of the application because they are co-deployed, often in the same process space. However, when a monolith is broken down into microservices and network connections are used, other mechanisms are needed to maintain trust. The use of JSON Web Tokens (JWTs) is one such mechanism for managing and establishing trust between microservices. This is discussed in more detail in Chapter 5, Enhancing Cloud-Native Applications.
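MicroProfile JWT makes the verified token available to the service for authorization decisions. A minimal sketch; the portfolio resource and trader role are hypothetical:

import javax.annotation.security.RolesAllowed;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

import org.eclipse.microprofile.jwt.JsonWebToken;

@RequestScoped
@Path("portfolio")
public class PortfolioResource {

    // The verified JWT propagated with the request; trust comes from
    // validating the token's signature, not from network location.
    @Inject
    JsonWebToken jwt;

    @GET
    @RolesAllowed("trader")
    public String portfolio() {
        return "Portfolio for " + jwt.getName();
    }
}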

GraphQL

There is a default assumption in much cloud-native thinking that the APIs exposed are REST-based. However, this can lead to increased network calls and excessive data being sent across the network. GraphQL is a relatively new innovation that allows a service client to request exactly the information it needs over an HTTP connection. A traditional REST API has to provide all the data about a resource, but often only a subset is required; network bandwidth and client-side data processing are frequently wasted on data the client does not use. GraphQL solves this by allowing the client to send a query requesting exactly the data it needs and no more. This reduces the data being transported and fetched from the backing data store. MicroProfile provides a Java-based API for writing a GraphQL backend, which makes it easy to write a service that provides such a query-based API for clients.
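With MicroProfile GraphQL, exposing such a query-based API amounts to annotating a class. A minimal sketch with a hypothetical Profile type:

import org.eclipse.microprofile.graphql.GraphQLApi;
import org.eclipse.microprofile.graphql.Name;
import org.eclipse.microprofile.graphql.Query;

@GraphQLApi
public class ProfileApi {

    // A client asking only for the name sends:
    //   { profile(id: "42") { name } }
    // and no other fields cross the network.
    @Query("profile")
    public Profile profile(@Name("id") String id) {
        return new Profile(id, "Jane Doe", "jane@example.com");
    }

    public static class Profile {
        private final String id;
        private final String name;
        private final String email;

        public Profile(String id, String name, String email) {
            this.id = id;
            this.name = name;
            this.email = email;
        }

        public String getId() { return id; }
        public String getName() { return name; }
        public String getEmail() { return email; }
    }
}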

Summary

In this chapter, we have learned what a cloud-native application is and explored some architectures for building one. We have also looked at some best practices for building cloud-native applications and why they exist, so we can determine whether and when to apply them. This provides a good grounding for applying what you'll learn in the rest of the book to successfully build and deploy cloud-native applications.

In the next chapter, we will explore what MicroProfile is and how it can be used to build cloud-native applications.
