Adding Logging and Tracing to Services

In this chapter, you will learn about logging and tracing tools. We will use Spring Micrometer, Brave, the Elasticsearch, Logstash, and Kibana (ELK) stack, and Zipkin. The ELK stack and Zipkin will implement distributed logging and tracing of API request/response flows, while Spring Micrometer with Actuator will inject tracing information into API calls. You will learn how to publish and analyze the logs and traces of different requests and their responses.

These aggregated logs will help you troubleshoot web services. You will call one service (such as the gRPC client), which will in turn call another service (such as the gRPC server); the two calls are linked by a trace identifier. Using this trace identifier, you can then search the centralized logs and debug the request flow. This is the sample flow we will use in this chapter, but the same approach applies when a service call fans out into more internal calls. You will...

Technical requirements

You will need the following to develop and execute the code in this chapter:

  • Any Java IDE, such as NetBeans, IntelliJ, or Eclipse
  • Java Development Kit (JDK) 17
  • An internet connection to clone the code and download the dependencies and Gradle
  • Insomnia/cURL (for API testing)
  • Docker and Docker Compose

You can find the code used in this chapter at https://github.com/PacktPublishing/Modern-API-Development-with-Spring-6-and-Spring-Boot-3/tree/dev/Chapter12.

So, let’s begin!

Logging and tracing using the ELK stack

Today, products and services are divided into multiple small parts and executed as separate processes or deployed as separate services, rather than as a monolithic system. An API call may trigger several other internal API calls, so you need distributed, centralized logging to trace a request that spans multiple web services. This tracing is done using a trace identifier (traceId), also referred to as a correlation identifier (correlationId): a unique string that is generated for an API call requiring multiple inter-service calls and then propagated to each subsequent call for tracking purposes.
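
To make the mechanism concrete, here is a minimal, hypothetical servlet filter (not the book's code; Micrometer automates this for you) that reads an incoming trace identifier from an assumed X-Trace-Id header, generates one if absent, and places it in the logging context:

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;

import java.io.IOException;
import java.util.UUID;

// Hypothetical filter: reuses the incoming trace identifier (or creates one)
// and stores it in the logging context so every log line carries it.
@Component
public class TraceIdFilter implements Filter {

    private static final String TRACE_HEADER = "X-Trace-Id"; // assumed header name

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        String traceId = req.getHeader(TRACE_HEADER);
        if (traceId == null || traceId.isBlank()) {
            traceId = UUID.randomUUID().toString(); // first hop: populate a new ID
        }
        MDC.put("traceId", traceId); // makes the ID available to the log pattern
        ((HttpServletResponse) response).setHeader(TRACE_HEADER, traceId);
        try {
            chain.doFilter(request, response); // downstream calls propagate the header
        } finally {
            MDC.remove("traceId"); // avoid leaking the ID to pooled threads
        }
    }
}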

Errors and issues are inevitable in a production system. You need to debug them to ascertain the root cause, and one of the key tools for debugging is logs. Logs can also give you warnings related to the...

Installing the ELK stack

You can install the ELK stack in various ways: installing the individual components for your operating system, downloading the Docker images and running them individually, or running the images with Docker Compose, Docker Swarm, or Kubernetes. In this chapter, you are going to use Docker Compose.

Let’s understand the grammar of a Docker Compose file before we create the one for the ELK stack. A Docker Compose file is written in YAML and contains four important top-level keys (a minimal example follows this list):

  • version: This denotes the version of the Docker Compose file format. Use the version appropriate to your installed Docker Engine; see https://docs.docker.com/compose/compose-file/ for the mapping between Compose file versions and Docker Engine versions.
  • services: This contains one or more service definitions. The service definition represents the service executed by the container and...
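
As an illustration, a minimal Compose file for the ELK stack might look like the following sketch; the image versions, ports, and settings here are assumptions and may differ from the chapter's actual file:

version: "3.8"                      # Compose file format version

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.9.0  # assumed version
    environment:
      - discovery.type=single-node  # single-node cluster for local development
      - xpack.security.enabled=false
    ports:
      - "9200:9200"                 # Elasticsearch REST API

  logstash:
    image: docker.elastic.co/logstash/logstash:8.9.0
    ports:
      - "5000:5000"                 # TCP input that applications log to
    depends_on:
      - elasticsearch

  kibana:
    image: docker.elastic.co/kibana/kibana:8.9.0
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"                 # Kibana UI
    depends_on:
      - elasticsearch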

Implementing logging and tracing in the gRPC code

Logging and tracing go hand in hand. Logging in the application code is already taken care of by default: you use Logback, and logs are either configured to display on the console or pushed to the filesystem. However, you also need to push the logs to the ELK stack for indexing and analysis. For this purpose, you make certain changes to the Logback configuration file, logback-spring.xml, to push the logs to Logstash. On top of that, these logs should also contain tracing information.
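
A sketch of such a Logback configuration is shown below; it assumes the logstash-logback-encoder library is on the classpath, and the destination host and port are placeholders:

<?xml version="1.0" encoding="UTF-8"?>
<!-- logback-spring.xml (sketch): ships logs to Logstash over TCP; assumes the
     net.logstash.logback:logstash-logback-encoder dependency is available -->
<configuration>
    <springProperty scope="context" name="appName" source="spring.application.name"/>

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>localhost:5000</destination> <!-- Logstash TCP input (assumed port) -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- tags each JSON log event with the service name for filtering in Kibana -->
            <customFields>{"service":"${appName}"}</customFields>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOGSTASH"/>
    </root>
</configuration>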

Correlation/trace identifiers should be populated and propagated in distributed transactions for tracing purposes. A distributed transaction refers to the main API call that internally calls other services to serve the request. Before Spring Boot 3, Spring provided distributed tracing support through the Spring Cloud Sleuth library; now, tracing support is provided by Spring Micrometer. It generates the trace ID along with the span identifier...
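
In a Spring Boot 3 application, this support is typically enabled by adding the following dependencies; the coordinates below reflect the standard Micrometer/Brave/Zipkin setup and are not necessarily the chapter's exact build file:

// build.gradle (sketch): enables Micrometer tracing with Brave and span
// reporting to Zipkin, following Spring Boot 3 conventions
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator' // exposes tracing auto-configuration
    implementation 'io.micrometer:micrometer-tracing-bridge-brave'         // Micrometer Tracing backed by Brave
    implementation 'io.zipkin.reporter2:zipkin-reporter-brave'             // ships Brave spans to a Zipkin server
}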

Distributed tracing with Zipkin and Micrometer

Spring Micrometer is a utility library that collects the metrics generated by a Spring Boot application. It provides vendor-neutral APIs that let you export the collected metrics to different systems, such as ELK. It collects many types of metrics, a few of which are the following (a sketch follows this list):

  • Metrics related to the JVM, CPU, and cache
  • Latencies in Spring MVC, WebFlux, and the REST client
  • Metrics related to Datasource and HikariCP
  • Uptime and Tomcat usage
  • Events logged to Logback
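
The following hypothetical snippet illustrates the vendor-neutral API; the class and metric names are invented for illustration, and the same code works regardless of which monitoring backend the registry exports to:

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.stereotype.Service;

// Hypothetical service showing Micrometer's vendor-neutral metrics API.
@Service
public class OrderMetrics {

    private final MeterRegistry registry;

    public OrderMetrics(MeterRegistry registry) { // auto-configured by Spring Boot
        this.registry = registry;
    }

    public void recordOrder(Runnable placeOrder) {
        registry.counter("orders.placed").increment(); // simple monotonic counter
        Timer timer = registry.timer("orders.latency"); // latency distribution
        timer.record(placeOrder);                       // times the wrapped call
    }
}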

Zipkin, along with Micrometer, helps you not only to trace transactions across multiple service invocations but also to capture the response time taken by each service involved in the distributed transaction. Zipkin also visualizes this information in helpful graphs, letting you locate performance bottlenecks and drill down into the specific API call that causes a latency issue. You can find out the total time taken by the main...
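
A typical way to wire a Spring Boot 3 application to a local Zipkin server is through configuration properties such as the following; the values shown are assumptions for local development:

# application.yml (sketch): points the tracer at a local Zipkin server;
# property names follow Spring Boot 3 conventions
management:
  zipkin:
    tracing:
      endpoint: http://localhost:9411/api/v2/spans  # default Zipkin span endpoint
  tracing:
    sampling:
      probability: 1.0   # sample every request (use a lower value in production)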

Summary

In this chapter, you learned why the trace/correlation ID is important and how to set it up using Micrometer with Brave. You can use these generated IDs to find the relevant logs and measure API call durations. You integrated the Spring Boot services with the ELK stack and Zipkin.

You also implemented the extra code and configuration required to enable distributed tracing for gRPC-based services.

You acquired log aggregation and distributed tracing skills using Micrometer, Brave, the ELK stack, and Zipkin.

In the next chapter, you are going to learn about the fundamentals of GraphQL APIs.

Questions

  1. What is the difference between the trace ID and span ID?
  2. Should you use a broker between services that generate the logs and the ELK stack? If yes, why?
  3. How does Zipkin work?

Answers

  1. Trace IDs and span IDs are created when the distributed transaction is initiated. A trace ID is generated for the main API call by the receiving service using Micrometer (Spring Cloud Sleuth before Spring Boot 3), and it is generated only once per distributed call. Span IDs are generated by every service participating in the distributed transaction. The trace ID acts as a correlation ID that is common across all the services involved in a call requiring a distributed transaction, whereas each service has its own span ID for each of its API calls.
  2. Yes. A broker such as Kafka, RabbitMQ, or Redis allows robust persistence of logs and removes the risk of losing log data under adverse circumstances. It also performs better and can absorb sudden spikes of data.
  3. A tracer, such as Micrometer with Brave or Spring Cloud Sleuth, performs instrumentation and does two jobs: it records the time and metadata of the call being performed, and it propagates the trace IDs to other services participating in the...