Observability Needs of Modern Applications
With the increasing complexity of distributed systems, we need better tools to build and operate our applications. Distributed tracing is one such technique that allows you to collect structured and correlated telemetry with minimum effort and enables observability vendors to build powerful analytics and automation.
In this chapter, we’ll explore common observability challenges and see how distributed tracing brings observability to our systems where logs and counters can’t. We’ll see how correlation and causation along with structured and consistent telemetry help answer arbitrary questions about the system and mitigate issues faster.
Here’s what you will learn:
- An overview of monitoring techniques using counters, logs, and events
- Core concepts of distributed tracing – the span and its structure
- Context propagation standards
- How to generate meaningful and consistent telemetry
- How to use distributed tracing along with metrics and logs for performance analysis and debugging
By the end of this chapter, you will become familiar with the core concepts and building blocks of distributed tracing, which you will be able to use along with other telemetry signals to debug functional issues and investigate performance issues in distributed applications.
Understanding why logs and counters are not enough
Monitoring and observability cultures vary across the industry; some teams use ad hoc debugging with `printf`, while others employ sophisticated observability solutions and automation. Still, almost every system uses a combination of common telemetry signals: logs, events, metrics or counters, and profiles. Telemetry collection alone is not enough. A system is observable if we can detect and investigate issues, and to achieve this, we need tools to store, index, visualize, and query the telemetry, navigate across different signals, and automate repetitive analysis.
Before we begin exploring tracing and discovering how it helps, let’s talk about other telemetry signals and their limitations.
Logs
A log is a record of some event. Logs typically have a timestamp, level, class name, and formatted message, and may also have a property bag with additional context.
Logs are a low-ceremony tool, with plenty of logging libraries and tools for any ecosystem.
Common problems with logging include the following:
- Verbosity: Initially, we won’t have enough logs, but eventually, as we fill gaps, we will have too many. They become hard to read and expensive to store.
- Performance: Logging is a common source of performance problems, even when used wisely. It’s also very common to serialize objects or allocate strings for logging even when the corresponding log level is disabled.
One new log statement can take your production down; I did it once. The log I added was written every millisecond. Multiplied by the number of service instances, it created an I/O bottleneck big enough to significantly increase latency and the error rate for users.
- Not queryable: Logs coming from applications are intended for humans. We can add context and unify the format within our application and still only be able to filter logs by context properties. Logs change with every refactoring, disappear, or become out of date. New people joining a team need to learn logging semantics specific to a system, and the learning curve can be steep.
- No correlation: Logs for different operations are interleaved. The process of finding logs describing certain operations is called correlation. In general, log correlation, especially across services, must be implemented manually (spoiler: not in ASP.NET Core).
Note
Logs are easy to produce but are verbose and can significantly impact performance. They are also difficult to filter, query, or visualize.
To be accessible and useful, logs are sent to some central place, a log management system, which stores, parses, and indexes them so they can be queried. This implies that your logs need to have at least some structure.
`ILogger` in .NET supports structured logging, as we’ll see in Chapter 8, Writing Structured and Correlated Logs, so you get the human-readable message along with the context. Structured logging, combined with structured storage and indexing, converts your logs into rich events that you can use for almost anything.
Events
An event is a structured record of something. It has a timestamp and a property bag. It may have a name, or that could just be one of the properties.
The difference between logs and events is semantic – an event is structured and usually follows a specific schema.
For example, an event that describes adding an item to a shopping bag should have a well-known name, such as `shopping_bag_add_item`, with `user-id` and `item-id` properties. Then, you can query them by name, item, and user. For example, you can find the top 10 popular items across all users.
If you write it as a log message, you’d probably write something like this:
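A minimal sketch of such a log statement with `ILogger`, assuming hypothetical `itemId` and `userId` variables:

```csharp
// A message template with named placeholders; a structured logging
// provider can capture itemId and userId as individual properties.
logger.LogInformation("Added item {itemId} to shopping bag for user {userId}",
    itemId, userId);
```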
If your logging provider captures individual properties, you would get the same context as with events. So, now we can find every log for this user and item, which probably includes other logs not related to adding an item.
Note
Events with a consistent schema can be queried efficiently but have the same verbosity and performance problems as logs.
Metrics and counters
Logs and events share the same problems – verbosity and performance overhead. One way to solve them is aggregation.
A metric is a value of something aggregated by dimensions and over a period of time. For example, a request latency metric can have an HTTP route, status code, method, service name, and instance dimensions.
Common problems with metrics include the following:
- Cardinality: Each combination of dimensions is a time series, and aggregation happens within one time series. Adding a new dimension causes a combinatorial explosion, so metrics must have low cardinality – that is, they cannot have too many dimensions, and each one must have a small number of distinct values. As a result, you can’t measure granular things such as per-user experience with metrics.
- No causation: Metrics show correlation but not cause and effect, so they are not a great tool for investigating issues.
As an expert on your system, you might use your intuition to come up with possible reasons for certain types of behavior and then use metrics to confirm your hypothesis.
- Verbosity: Metrics have problems with verbosity too. It’s common to add metrics that measure just one thing, such as `queue_is_full` or `queue_is_empty`. Something such as `queue_utilization` would be more generic. Over time, the number of metrics grows along with the number of alerts, dashboards, and team processes relying on them.
Note
Metrics have low impact on performance, low volume that doesn’t grow much with scale, low storage costs, and low query time. They are great for dashboards and alerts but not for issue investigation or granular analytics.
A counter is a single time series – it’s a metric without dimensions, typically used to collect resource utilization such as CPU load or memory usage. Counters don’t work well for application performance or usage, as you need a dedicated counter for each combination of attributes, such as HTTP route, status code, and method. They are difficult to collect and even harder to use. Luckily, .NET supports metrics with dimensions, and we will discuss them in Chapter 7, Adding Custom Metrics.
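A minimal sketch of a dimensional metric using `System.Diagnostics.Metrics` (available since .NET 6); the meter name, instrument name, and tag names here are illustrative:

```csharp
using System.Collections.Generic;
using System.Diagnostics.Metrics;

public static class RequestMetrics
{
    // A Meter groups related instruments; the name is illustrative.
    private static readonly Meter Meter = new("MemeService");

    // A histogram records a distribution (such as request latency);
    // every measurement can carry dimensions (tags).
    private static readonly Histogram<double> RequestDuration =
        Meter.CreateHistogram<double>("http.server.request.duration", unit: "ms");

    public static void RecordRequest(double elapsedMs, string route, int statusCode)
    {
        RequestDuration.Record(elapsedMs,
            new KeyValuePair<string, object?>("http.route", route),
            new KeyValuePair<string, object?>("http.status_code", statusCode));
    }
}
```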
What’s missing?
Now you know all you need to monitor a monolith or small distributed system – use metrics for system health analysis and alerts, events for usage, and logs for debugging. This approach has taken the tech industry far, and there is nothing essentially wrong with it.
With up-to-date documentation, a few key performance and usage metrics, concise, structured, correlated, and consistent events, common conventions, and tools across all services, anyone operating your system can do performance analysis and debug issues.
Note
So, the ultimate goal is to efficiently operate a system, and the problem is not a specific telemetry signal or its limitations but a lack of standard solutions and practices, correlation, and structure for existing signals.
Before we jump into distributed tracing and see how its ecosystem addresses these gaps, let’s summarize the new requirements for an ideal observability solution that we intend to address with tracing and the new capabilities it brings. We should also keep in mind the old requirements – low performance overhead and manageable costs.
Systematic debugging
We need to be able to investigate issues in a generic way. From an error report to an alert on a metric, we should be able to drill down into the issue, follow specific requests end to end, or bubble up from an error deep in the stack to understand its effect on users.
All this should be reasonably easy to do when you’re on call and paged at 2AM to resolve an incident in production.
Answering ad hoc questions
I might want to understand whether users from Redmond, WA, who purchased a product from my website are experiencing longer delivery times than usual and why – because of the shipment company, rain, cloud provider issues in this region, or anything else.
You should not need to add more telemetry to answer most usage or performance questions. Occasionally, you’d need to add a new context property or an event, but that should be rare on a stable code path.
Self-documenting systems
Modern systems are dynamic – with continuous deployments, feature flag changes in runtime, and dozens of external dependencies with their own instabilities, nobody can know everything.
Telemetry becomes your single source of truth. Assuming it has enough context and common semantics, an observability vendor should be able to visualize it reasonably well.
Auto-instrumentation
It’s difficult to instrument everything in your system yourself – it’s repetitive, error-prone, and hard to keep up to date, test, and align with a common schema and semantics. We need shared instrumentations for common libraries so that we only need to add application-specific telemetry and context.
With an understanding of these requirements, we will move on to distributed tracing.
Introducing distributed tracing
Distributed tracing is a technique that brings structure, correlation, and causation to collected telemetry. It defines a special event called a span and specifies causal relationships between spans. Spans follow common conventions that are used to visualize and analyze traces.
Span
A span describes an operation such as an incoming or outgoing HTTP request, a database call, an expensive I/O call, or any other interesting call. It has just enough structure to represent anything and still be useful. Here are the most important span properties:
- The span’s name should describe the operation type in a human-readable format and have low cardinality.
- The span’s start time and duration.
- The status indicates success, failure, or no status.
- The span kind distinguishes the client, server, and internal calls, or the producer and consumer for async scenarios.
- Attributes (also known as tags or annotations) describe specific operations.
- Span context identifies spans and is propagated everywhere, enabling correlation. A parent span identifier is also included on child spans for causation.
- Events provide additional information about operations within a span.
- Links connect traces and spans when parent-child relationships don’t work – for example, for batching scenarios.
Note
In .NET, the tracing span is represented by `System.Diagnostics.Activity`. The `System.Span` class is not related to distributed tracing.
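A minimal sketch of creating a span with the .NET tracing API; the source name `Memes.Storage`, the attribute names, and the `DownloadAsync` helper are all illustrative:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

public class StorageClient
{
    // ActivitySource is the .NET API for creating spans (activities);
    // the source name is illustrative.
    private static readonly ActivitySource Source = new("Memes.Storage");

    public async Task<byte[]?> GetImageAsync(string imageId)
    {
        // Returns null unless a listener (such as the OpenTelemetry SDK) is enabled.
        using var activity = Source.StartActivity("GetImage", ActivityKind.Client);
        activity?.SetTag("meme.id", imageId);
        try
        {
            return await DownloadAsync(imageId);
        }
        catch (Exception ex)
        {
            activity?.SetStatus(ActivityStatusCode.Error, ex.Message);
            throw;
        }
    }

    // Placeholder for the real cold-storage call.
    private Task<byte[]?> DownloadAsync(string imageId) =>
        Task.FromResult<byte[]?>(null);
}
```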
Relationships between spans
A span is a unit of tracing, and to trace more complex operations, we need multiple spans.
For example, a user may attempt to get an image and send a request to the service. The image is not cached, and the service requests it from the cold storage (as shown in Figure 1.1):

Figure 1.1 – A GET image request flow
To make this operation debuggable, we should report multiple spans:
- The incoming request
- The attempt to get the image from the cache
- Image retrieval from the cold storage
- Caching the image
These spans form a trace – a set of related spans fully describing a logical end-to-end operation sharing the same `trace-id`. Within the trace, each span is identified by a `span-id`. Spans include a pointer to a parent span – it’s just their parent’s `span-id`.
`trace-id`, `span-id`, and `parent-span-id` allow us to not only correlate spans but also record relationships between them. For example, in Figure 1.2, we can see that the Redis `GET`, `SETEX`, and HTTP `GET` spans are siblings and the incoming request is their parent:

Figure 1.2 – Trace visualization showing relationships between spans
Spans can have more complicated relationships, which we’ll talk about later in Chapter 6, Tracing Your Code.
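A minimal sketch of how nesting creates this parent-child relationship with the .NET tracing API; the source and span names are illustrative, and activities are null unless a listener (such as the OpenTelemetry SDK) is enabled:

```csharp
using System;
using System.Diagnostics;

var source = new ActivitySource("Memes.Frontend");

// The outer activity becomes Activity.Current for the duration of the using block.
using (var getMeme = source.StartActivity("GET /memes/{id}", ActivityKind.Server))
{
    // Started while getMeme is current, so this span shares its TraceId
    // and records getMeme's SpanId as its ParentSpanId.
    using var cacheLookup = source.StartActivity("Redis GET", ActivityKind.Client);

    Console.WriteLine(cacheLookup?.TraceId);      // same as getMeme?.TraceId
    Console.WriteLine(cacheLookup?.ParentSpanId); // same as getMeme?.SpanId
}
```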
Span context (aka `trace-id` and `span-id`) enables even more interesting cross-signal scenarios. For example, you can stamp the parent span context on logs (spoiler: just configure `ILogger` to do it) and correlate logs to traces. For example, if you use `ConsoleProvider`, you will see something like this:

Figure 1.3 – Logs include span context and can be correlated to other signals
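A minimal sketch of enabling this in an ASP.NET Core app with console logging; `ActivityTrackingOptions` selects which span context fields are stamped on log scopes, and the route and message are illustrative:

```csharp
using Microsoft.Extensions.Logging;

var builder = WebApplication.CreateBuilder(args);

// Stamp the current trace-id and span-id on every log record as scopes.
builder.Logging.Configure(options =>
    options.ActivityTrackingOptions =
        ActivityTrackingOptions.TraceId | ActivityTrackingOptions.SpanId);

// The console logger prints scopes only when IncludeScopes is enabled.
builder.Logging.AddSimpleConsole(options => options.IncludeScopes = true);

var app = builder.Build();

app.MapGet("/", (ILogger<Program> logger) =>
{
    logger.LogInformation("Hello from a traced request");
    return "ok";
});

app.Run();
```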
You could also link metrics to traces using exemplars – metric metadata containing the trace context of operations that contributed to a recorded measurement. For instance, you can check examples of spans that correspond to the long tail of your latency distribution.
Attributes
Span attributes are a property bag that contains details about the operation.
Span attributes should describe this specific operation well enough to understand what happened. OpenTelemetry semantic conventions specify attributes for popular technologies to help with this, which we’ll talk about in the Ensuring consistency and structure section later in this chapter.
For example, an incoming HTTP request is identified with at least the following attributes: the HTTP method, path, query, API route, and status code:

Figure 1.4 – The HTTP server span attributes
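For illustration, here is a sketch of setting such attributes by hand; instrumentation libraries normally populate them for you. The attribute names follow OpenTelemetry HTTP semantic conventions, which have changed over time, so treat the exact names as indicative, and the route template is hypothetical:

```csharp
using System.Diagnostics;
using Microsoft.AspNetCore.Http;

public static class ServerSpanEnrichment
{
    // Normally done by ASP.NET Core instrumentation; shown here only to
    // illustrate what HTTP server span attributes look like.
    public static void Enrich(Activity activity, HttpRequest request, int statusCode)
    {
        activity.SetTag("http.request.method", request.Method);
        activity.SetTag("url.path", request.Path.Value);
        activity.SetTag("url.query", request.QueryString.Value);
        activity.SetTag("http.route", "/memes/{id}"); // illustrative route template
        activity.SetTag("http.response.status_code", statusCode);
    }
}
```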
Instrumentation points
So, we have defined a span and its properties, but when should we create spans? Which attributes should we put on them? While there is no strict standard to follow, here’s the rule of thumb:
Create a new span for every incoming and outgoing network call and use standard attributes for the protocol or technology whenever available.
This is what we’ve done previously with the memes example, and it allows us to see what happened on the service boundaries and detect common problems: dependency issues, status, latency, and errors on each service. This also allows us to correlate logs, events, and anything else we collect. Plus, observability backends are aware of HTTP semantics and will know how to interpret and visualize your spans.
There are exceptions to this rule, such as socket calls, where requests could be too small to be instrumented. In other cases, you might still be rightfully concerned with verbosity and the volume of generated data – we’ll see how to control it with sampling in Chapter 5, Configuration and Control Plane.
Tracing – building blocks
Now that you are familiar with the core concepts of tracing and its methodology, let’s talk about implementation. We need a set of convenient APIs to create and enrich spans and pass context around. Historically, every Application Performance Monitoring (APM) tool had its own SDKs to collect telemetry with their own APIs. Changing the APM vendor meant rewriting all your instrumentation code.
OpenTelemetry solves this problem – it’s a cross-language telemetry platform for tracing, metrics, events, and logs that unifies telemetry collection. Most APM tools, log management systems, and observability backends support OpenTelemetry, so you can change vendors without rewriting any instrumentation code.
.NET tracing implementation conforms to the OpenTelemetry API specification, and in this book, .NET tracing APIs and OpenTelemetry APIs are used interchangeably. We’ll talk about the difference between them in Chapter 6, Tracing Your Code.
Even though OpenTelemetry primitives are baked into .NET, so instrumentation code does not need to depend on OpenTelemetry packages, we still need to add the OpenTelemetry SDK to collect telemetry from the application – it has everything we need to configure collection and export. You could also write your own solution compatible with the .NET tracing APIs.
OpenTelemetry became an industry standard for tracing and beyond; it’s available in multiple languages, and in addition to unified collection APIs, it provides configurable SDKs and a standard wire format for telemetry – the OpenTelemetry protocol (OTLP). You can send telemetry to any compatible vendor, either by adding a vendor-specific exporter or, if the backend supports OTLP, by configuring the vendor’s endpoint.
As shown in Figure 1.5, the application configures the OpenTelemetry SDK to export telemetry to the observability backend. Application code, .NET libraries, and various instrumentations use .NET tracing APIs to create spans, which the OpenTelemetry SDK listens to, processes, and forwards to an exporter.

Figure 1.5 – Tracing building blocks
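A minimal sketch of this wiring in an ASP.NET Core app, assuming the OpenTelemetry.Extensions.Hosting, instrumentation, and OTLP exporter NuGet packages are referenced (package and extension method names may vary slightly between versions); the service and source names are illustrative:

```csharp
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("meme-frontend"))
    .WithTracing(tracing => tracing
        // Listen to spans from ASP.NET Core and HttpClient native instrumentation.
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        // Also listen to the application's own ActivitySource.
        .AddSource("Memes.Frontend")
        // Export over OTLP to whatever backend is configured for the endpoint.
        .AddOtlpExporter());

var app = builder.Build();
app.Run();
```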
So, OpenTelemetry decouples instrumentation code from the observability vendor, but it does much more than that. Now, different applications can share instrumentation libraries and observability vendors have unified and structured telemetry on top of which they can build rich experiences.
Instrumentation
Historically, all APM vendors had to instrument popular libraries: HTTP clients, web frameworks, Entity Framework, SQL clients, Redis client libraries, RabbitMQ, cloud providers’ SDKs, and so on. That did not scale well. But with .NET tracing APIs and OpenTelemetry semantics, instrumentation became common for all vendors. You can find a growing list of shared community instrumentations in the OpenTelemetry Contrib repo: https://github.com/open-telemetry/opentelemetry-dotnet-contrib.
Moreover, since OpenTelemetry is a vendor-neutral standard and baked into .NET, it’s now possible for libraries to implement native instrumentation – HTTP and gRPC clients, ASP.NET Core, and several other libraries support it.
Even with native tracing support, it’s off by default – you need to install and register specific instrumentation (which we’ll cover in Chapter 2, Native Monitoring in .NET). Otherwise, tracing code does nothing and, thus, does not add any performance overhead.
Backends
The observability backend (aka monitoring, APM, or log management system) is a set of tools responsible for ingestion, storage, indexing, visualization, querying, and probably other things that help you monitor your system, investigate issues, and analyze performance.
Observability vendors build these tools and provide rich user experiences to help you use traces along with other signals.
Collecting traces for common libraries became easy with the OpenTelemetry ecosystem. As you’ll see in Chapter 2, Native Monitoring in .NET, most of it can be done automatically with just a few lines of code at startup. But how do we use them?
While you can send spans to `stdout` and store them on the filesystem, this would not leverage all the tracing benefits. Traces can be huge, but even when they are small, grepping them is not convenient.
Trace visualization (such as a Gantt chart, trace viewer, or trace timeline) is one of the common features tracing providers offer. Figure 1.6 shows a trace timeline in Jaeger – an open source distributed tracing platform:

Figure 1.6 – Trace visualization in Jaeger with errors marked with an exclamation point
While it may take a while to find an error log, the visualization shows what’s important – where the failures are, the latency, and the sequence of steps. As we can see in Figure 1.6, the frontend call failed because of a failure on the storage side, which we can further drill into.
However, we can also see that the frontend made four consecutive calls into storage, which potentially could be done in parallel to speed things up.
Another common feature is filtering or querying by any of the span properties, such as name, `trace-id`, `span-id`, `parent-id`, attribute name, status, timestamp, duration, or anything else. An example of such a query is shown in Figure 1.7:

Figure 1.7 – A custom Azure Monitor query that calculates the Redis hit rate
For example, we don’t report a metric for the cache hit rate, but we can estimate it from traces. While such estimates are not precise because of sampling and might be more expensive to query than metrics, we can still get them ad hoc, especially when we investigate specific failures.
Since traces, metrics, and logs are correlated, you will fully leverage observability capabilities if your vendor supports multiple signals or integrates well with other tools.