gRPC Go for Professionals: Implement, test, and deploy production-grade microservices

By Clément Jean
Book Jul 2023 260 pages 1st Edition

Protobuf Primer

Now that we understand the basic networking concepts behind gRPC, we can touch upon another pillar in the construction of your gRPC APIs: Protocol Buffers, more commonly known as Protobuf. It is an important part of the communication process because, as we saw in the previous chapter, every message is encoded into binary, and this is exactly what Protobuf does for us in gRPC. In this chapter, the goal is to understand what Protobuf is and why it is needed for high-efficiency communication. We are also going to look at some details concerning the serialization and deserialization of messages.

In this chapter, we’re going to cover the following main topics:

  • Protobuf is an Interface Description Language (IDL)
  • Serialization/deserialization
  • Protobuf versus JSON
  • Encoding details
  • Common types
  • Services


In this chapter, we are going to discuss how Protocol Buffers serializes and deserializes data. While this could be done by writing code, we are going to stay away from that in order to learn how to use the protoc compiler to debug and optimize our Protobuf schemas. Thus, if you want to reproduce the examples, you will need to download the protoc compiler from the Protobuf GitHub Releases page. The easiest way to get started is to download the binary releases, which are named with the convention protoc-${VERSION}-${OS}-${ARCHITECTURE}. Uncompress the zip file and follow the readme.txt instructions (note: we intend to use Well-Known Types later, so make sure you also install the includes). After that, you should be able to run the following command:

$ protoc --version

Finally, as always, you will be able to find the companion code in the GitHub repository under the folder for the current chapter (chapter2).

Protobuf is an IDL

Protobuf is a language. More precisely, it is an IDL. It is important to make this distinction because, as we will see in more detail later, in Protobuf we do not write any logic the way we do in a programming language; instead, we write data schemas, which are contracts to be used for serialization and to be fulfilled by deserialization. So, before explaining all the rules that we need to follow when writing a .proto file and going through all the details about serialization and deserialization, we first need to get a sense of what an IDL is and what the goal of such a language is.

An IDL, as we saw earlier, is an acronym for Interface Description Language, and the name contains three parts. The first part, Interface, describes a piece of code that sits between two or more applications and hides the complexity of implementation. As such, we do not make any assumptions about the hardware on which an application is running, the OS on which it runs, or the programming language in which it is written. This interface is, by design, hardware-, OS-, and language-agnostic. This is important for Protobuf and several other serialization data schemas because it lets developers write the schema once and use it across different projects.

The second part is Description, and it builds on the concept of Interface. Our interface describes what the two applications can expect to receive and what they are expected to send to each other. This includes describing some types and their properties, the relationships between these types, and the way these types are serialized and deserialized. As this may be a bit abstract, let us look at an example in Protobuf. If we wanted to create a type called Account that contains an ID, a username, and the rights this account has, we could write the following:

syntax = "proto3";

enum AccountRight {
  ACCOUNT_RIGHT_UNSPECIFIED = 0;
  ACCOUNT_RIGHT_READ = 1;
  ACCOUNT_RIGHT_READ_WRITE = 2;
  ACCOUNT_RIGHT_ADMIN = 3;
}

message Account {
  uint64 id = 1;
  string username = 2;
  AccountRight right = 3;
}
If we skip some of the details that are not important at this stage, we can see that we define the following:

  • An enumeration listing all the possible rights and an extra variant called ACCOUNT_RIGHT_UNSPECIFIED
  • A message (equivalent to a class or struct) listing the three properties that an Account type should have

Again, without looking at the details, it is readable, and the relationship between Account and AccountRight is easy to understand.

Finally, the last part is Language. This is here to say that, as with every language—computer ones or not—we have rules that we need to follow so that another human, or a compiler, can understand our intent. In Protobuf, we write our code to please the compiler (protoc), and then it does all the heavy lifting for us. It will read our code and generate code in the language that we need for our application, and then our user code will interact with the generated code. Let us look at a simplified output of what the Account type defined previously would give in Go:

type AccountRight int32

const (
  AccountRight_ACCOUNT_RIGHT_UNSPECIFIED AccountRight = 0
  AccountRight_ACCOUNT_RIGHT_READ AccountRight = 1
  AccountRight_ACCOUNT_RIGHT_READ_WRITE AccountRight = 2
  AccountRight_ACCOUNT_RIGHT_ADMIN AccountRight = 3
)

type Account struct {
  Id uint64 `protobuf:"varint,1,…`
  Username string `protobuf:"bytes,2,…`
  Right AccountRight `protobuf:"varint,3,…`
}

In this code, there are important things to notice. Let us break this code into pieces:

type AccountRight int32

const (
  AccountRight_ACCOUNT_RIGHT_UNSPECIFIED AccountRight = 0
  AccountRight_ACCOUNT_RIGHT_READ AccountRight = 1
  AccountRight_ACCOUNT_RIGHT_READ_WRITE AccountRight = 2
  AccountRight_ACCOUNT_RIGHT_ADMIN AccountRight = 3
)

Our AccountRight enum is defined as constants with values of type int32. Each enum variant’s name is prefixed with the name of the enum, and each constant has the value that we set after the equals sign in the Protobuf code. These values are called field tags, and we will introduce them later in this chapter.

Now, take a look at the following code:

type Account struct {
  Id uint64 `protobuf:"varint,1,…`
  Username string `protobuf:"bytes,2,…`
  Right AccountRight `protobuf:"varint,3,…`
}

Here, we have our Account message transpiled to a struct with Id, Username, and Right exported fields. Each of these fields has a type that is converted from a Protobuf type to a Golang type. In our example here, Go types and Protobuf types have the exact same names, but it is important to know that in some cases, the types will translate differently. Such an example is double in Protobuf, which will translate to float64 for Go. Finally, we have the field tags, referenced in the metadata following the field. Once again, their meaning will be explained later in this chapter.

So, to recapitulate, an IDL is a piece of code sitting between different applications and describing objects and their relationships by following certain defined rules. This IDL, in the case of Protobuf, will be read, and it will be used to generate code in another language. And after that, this generated code will be used by the user code to serialize and deserialize data.

Serialization and deserialization

Serialization and deserialization are two concepts that are used in many ways and in many kinds of applications. This section is going to discuss them in the context of Protobuf. So, even if you feel confident about your understanding of these two notions, it is worth making sure you understand them precisely. Once you do, it will be easier to follow the Encoding details section, where we delve deeper into how Protobuf serializes and deserializes data under the hood.

Let us start with serialization and then let us touch upon deserialization, which is just the opposite process. The goal of serialization is to store data, generally in a more compact or readable representation, to use it later. For Protobuf, this serialization happens on the data that you set in your generated code’s objects. For example, if we set the Id, Username, and Right fields in our Account struct, this data will be what Protobuf will work on. It will turn each field into a binary representation with different algorithms depending on the field type. And after that, we use this in-memory binary to either send data over the network (with gRPC, for example) or store it in more persistent storage.

Once it is time for us to use this serialized data again, Protobuf will perform deserialization. This is the process of reading the binary created earlier and populating the data back into an object in your favorite programming language to be able to act on it. Once again, Protobuf will use different algorithms depending on the type of data to read the underlying binary and know how to set or not set each of the fields of the object in question.

To summarize, Protobuf performs binary serialization to make data more compact than other formats such as XML or JSON. To do so, it will read data from the different fields of the generated code’s object, turn it into binary with different algorithms, and then when we finally need the data, Protobuf will read the data and populate the fields of a given object.

Protobuf versus JSON

If you’ve already worked on the backend or even the frontend, there is a 99.99 percent chance that you’ve worked with JSON. This is by far the most popular data schema out there, and there are reasons why that is the case. In this section, we are going to discuss the pros and cons of both JSON and Protobuf, and we are going to explain which one is more suitable for which situation. The goal here is to be objective because, as engineers, we need to choose the right tool for the job.

As we could write chapters about the pros and cons of each technology, we are going to reduce the scope of these advantages and disadvantages to three categories. These categories are the ones that developers care the most about when developing applications, as detailed here:

  • Size of serialized data: We want to reduce the bandwidth when sending data over the network
  • Readability of the data schema and the serialized data: We want to be able to have a descriptive schema so that newcomers or users can quickly understand it, and we want to be able to visualize the data serialized for debugging or editing purposes
  • Strictness of the schema: This quickly becomes a requirement when APIs grow, and we need to ensure the correct type of data is being sent and received between different applications

Serialized data size

In serialization, the Holy Grail is, in a lot of use cases, reducing the size of your data. This is because most often, we want to send that data to another application across the network, and the lighter the payload, the faster it should arrive on the other side. In this space, Protobuf is the clear winner against JSON. This is the case because JSON serializes to text whereas Protobuf serializes to binary and thus has more room to improve how compact the serialized data is. An example of that is numbers. If you set a number to the id field in JSON, you would get something like this:

{ "id": 123 }

First, we have some boilerplate with the braces, but most importantly we have a number that takes three characters, or three bytes. In Protobuf, if we set the same value to the same field, we would get the hexadecimal shown in the following callout.

Important note

In the chapter2 folder of the companion GitHub repository, you will find the files needed to reproduce all the results in this chapter. With protoc, we will be able to display the hexadecimal representation of our serialized data. To do that, you can run the following command:

Linux/Mac: cat ${INPUT_FILE_NAME}.txt | protoc --encode=${MESSAGE_NAME} ${PROTO_FILE_NAME}.proto | hexdump -C

Windows (PowerShell): (Get-Content ${INPUT_FILE_NAME}.txt | protoc --encode=${MESSAGE_NAME} ${PROTO_FILE_NAME}.proto) -join "`n" | Format-Hex

For example:

$ cat account.txt | protoc --encode=Account account.proto | hexdump -C

00000000 08 7b |.{|


Right now, this might look like magic numbers, but we are going to see in the next section how it is encoded into two bytes. Now, two bytes instead of three might look negligible but imagine this kind of difference at scale, and you would have wasted millions of bytes.

Readability

The next important thing about data schema serialization is readability. However, readability is a little bit too broad, especially in the context of Protobuf. As we saw, as opposed to JSON, Protobuf separates the schema from the serialized data. We write the schema in a .proto file and then the serialization will give us some binary. In JSON, the schema is the actual serialized data. So, to be clearer and more precise about readability, let us split readability into two parts: the readability of the schema and the readability of the serialized data.

As for the readability of the schema, this is a matter of preference, but there are a few points that make Protobuf stand out. The first is that Protobuf can contain comments, which are nice to have as extra documentation describing requirements. JSON does not allow comments, so we must find a different way to provide documentation. Generally, it is done with GitHub wikis or other external documentation platforms. This is a problem because this kind of documentation quickly becomes outdated when the project and the team working on it get bigger. A simple oversight, and your documentation no longer describes the real state of your API. With Protobuf, it is still possible to have outdated documentation, but as the documentation is closer to the code, there is more incentive and awareness to change the related comment.

The second feature that makes Protobuf more readable is the fact that it has explicit types. JSON has types but they are implicit. You know that a field contains a string if its value is surrounded by double quotes, a number when the value is only digits, and so on. In Protobuf, especially for numbers, we get more information out of types. If we have an int32 type, we can obviously know that this is a number, but on top of that, we know that it can accept negative numbers and we are able to know the range of numbers that can be stored in this field. Explicit types are important not only for security (more on that later) but also for letting the developer know the details of each field and letting them describe accurately their schemas to fulfill the business requirements.

For readability of the schema, I think we can agree that Protobuf is the winner here because it can be written as self-documenting code and we get explicit types for every field in objects.

As for the readability of serialized data, JSON is the clear winner here. As mentioned, JSON is both the data schema and the serialized data. What you see is what you get. Protobuf, however, serializes the data to binary, and it is way harder to read that, even if you know how Protobuf serializes and deserializes data. In the end, this is a trade-off between readability and serialized data size here. Protobuf will outperform JSON on serialized data and is way more explicit on the readability of the data schema. However, if you need human-readable data that can be edited by hand, Protobuf is not the right fit for your use case.

Schema strictness

Finally, the last category is the strictness of the schema. This is usually a nice feature to have when your team and your project scale because it ensures that the schema is correctly populated, and for a certain target language, it shortens the feedback loop for the developers.

Protobuf schemas are always valid because every field has an explicit type that can only contain certain values. We simply cannot pass a string to a field that was expecting a number, or a negative number to a field that was expecting a positive number. This is enforced in the generated code by either runtime checks for dynamic languages or at compile time for typed languages. In our case, since Go is a typed language, we will have compile-time checks.

And finally, in typed languages, a schema shortens the feedback loop because instead of having a runtime check that might or might not trigger an error, we simply have a compilation error. This makes our software more reliable, and developers can feel confident that if they were able to compile, the data set into the object would be valid.

In pure JSON, we cannot ensure that our schema is correct at compile time. Most often, developers will add extra configurations such as JSON Schema to have this kind of assurance at runtime. This adds complexity to our project and requires every developer to be disciplined because they could simply go about their code without developing the schema. In Protobuf, we do schema-driven development. The schema comes first, and then our application revolves around the generated types. Furthermore, we have assurance at compile time that the values that we set are correct and we do not need to replicate the setup to all our microservices or subprojects. In the end, we spend less time on configuration and we spend more time thinking about our data schemas and the data encoding.

Encoding details

Up until now, we talked a lot about “algorithms”; however, we did not get too much into the specifics. In this section, we are going to see the major algorithms that are behind the serialization and deserialization processes in Protobuf. We are first going to see all the types that we can use for our fields, then with that, we are going to divide them into three categories, and finally, we are going to explain which algorithm is used for each category.

In Protobuf, types that are considered simple and that are provided out of the box are called scalar types. There are 15 such types, as listed here:

  • int32
  • int64
  • uint32
  • uint64
  • sint32
  • sint64
  • fixed32
  • fixed64
  • sfixed32
  • sfixed64
  • double
  • float
  • string
  • bytes
  • bool

And out of these 15 types, 10 are for integers (the first 10 in the list). These types might be intimidating at first, but do not worry too much about how to choose between them right now; we are going to discuss that throughout this section. The most important thing to understand right now is that two-thirds of the types are for integers, and this shows what Protobuf is good at—encoding integers.

Now that we know the scalar types, let us separate these types into three categories. However, we are not here to make simple categories such as numbers, arrays, and so on. We want to make categories that are related to the Protobuf serialization algorithms. In total, we have three: fixed-size numbers, variable-size integers (varints), and length-delimited types. Here is a table with each category populated:

Fixed-size numbers    Varints    Length-delimited types
fixed32               int32      string
fixed64               int64      bytes
sfixed32              uint32
sfixed64              uint64
float                 sint32
double                sint64
                      bool

Let’s go through each now.

Fixed-size numbers

The easiest one to understand for developers who are used to typed languages is fixed-size numbers. If you worked with lower-level languages in which you tried to optimize storage space, you know that we can, on most hardware, store an integer in 32 bits (4 bytes) or in 64 bits (8 bytes). fixed32 and fixed64 are just binary representations of a normal number that you would have in languages that give you control over the storage size of your integers (for example, Go, C++, Rust, and so on). If we serialize the number 42 into a fixed32 type, we will have the following:

$ cat fixed.txt | protoc --encode=Fixed32Value wrappers.proto | hexdump -C
00000000  0d 2a 00 00 00                          |.*...|

Here, 2a is 42, and 0d is a combination of the field tag and the type of the field (more about that later in this section). In the same manner, if we serialize 42 in a fixed64 type, we will have the following:

$ cat fixed.txt | protoc --encode=Fixed64Value wrappers.proto | hexdump -C
00000000  09 2a 00 00 00 00 00 00  00         |.*.......|

And the only thing that changed is the combination between the type of the field and the field tag (09). This is mostly because we changed the type to 64-bit numbers.

Two other scalar types that are easy to understand are float and double. Once again, Protobuf produces the binary representation of these types. If we encode 42.42 as float, we will get the following output:

$ cat floating_point.txt | protoc --encode=FloatValue wrappers.proto | hexdump -C
00000000  0d 14 ae 29 42                          |...)B|

In this case, this is a little bit more complicated to decode, but this is simply because float numbers are encoded differently. If you are interested in this kind of data storage, you can look at the IEEE Standard for Floating-Point Arithmetic (IEEE 754), which explains how a float is formed in memory. What is important to note here is that floats are encoded in 4 bytes, and in front, we have our tag + type. And for a double type with a value of 42.42, we will get the following:

$ cat floating_point.txt | protoc --encode=DoubleValue wrappers.proto | hexdump -C
00000000  09 f6 28 5c 8f c2 35 45  40         |..(\..5E@|

This is encoded in 8 bytes and the tag + type. Note that the tag + type also changed here because we are in the realm of 64-bit numbers.

Finally, we are left with sfixed32 and sfixed64. We did not mention it earlier, but fixed32 and fixed64 are unsigned numbers. This means that we cannot store negative numbers in fields with these types. sfixed32 and sfixed64 solve that. So, if we encode –42 in a sfixed32 type, we will have the following:

$ cat sfixed.txt | protoc --encode=SFixed32Value wrappers.proto | hexdump -C
00000000  0d d6 ff ff ff                          |.....|

This is obtained by taking the binary for 42, flipping all the bits (1’s complement), and adding one (2’s complement). Otherwise, if you serialize a positive number, you will have the same binary as the fixed32 type. Then, if we encode –42 in a field with type sfixed64, we will get the following:

$ cat sfixed.txt | protoc --encode=SFixed64Value wrappers.proto | hexdump -C
00000000  09 d6 ff ff ff ff ff ff  ff         |.........|

This is like the sfixed32 type, only the tag + type was changed.

To summarize, fixed integers are simple binary representations of integers that resemble how they are stored in most computers’ memory. As their name suggests, their serialized data will always be serialized into the same number of bytes. For some use cases, this is fine to use such representations; however, in most cases, we would like to reduce the number of bits that are just here for padding. And in these use cases, we will use something called varints.

Varints

Now that we have seen fixed integers, let us move to another type of serialization for numbers: variable-length integers. As its name suggests, we will not get a fixed number of bytes when serializing an integer.

To be more precise, the smaller the integer, the smaller the number of bytes it will be serialized into, and the bigger the integer, the larger the number of bytes. Let us look at how the algorithm works.

In this example, let us serialize the number 300. To start, we are going to take the binary representation of that number:

100101100

With this binary, we can now split it into groups of 7 bits and pad with zeros if needed:

0000010 0101100

Now, since we lack 2 more bits to create 2 bytes, we are going to add 1 as the most significant bit (MSB) for all the groups except the first one, and we are going to add 0 as the MSB for the first group:

00000010 10101100

These MSBs are continuation bits. This means that, when we have 1, we still have 7 bits to read after, and if we have 0, this is the last group to be read. Finally, we put this number into little-endian order, and we have the following:

10101100 00000010

Or, we would have AC 02 in hexadecimal. Now that we have serialized 300 into AC 02, and keeping in mind that deserialization is the opposite of serialization, we can deserialize that data. We take our binary representation for AC 02, drop the continuation bits (MSBs), and we reverse the order of bytes. In the end, we have the following binary:

0000010 0101100

This is the same binary we started with. It equals 300.

Now, in the real world, you might have larger numbers. For a quick reference on positive numbers, here is a list of the thresholds at which the number of bytes will increase:

Threshold value                       Byte size
0                                     1
128 (2^7)                             2
16,384 (2^14)                         3
2,097,152 (2^21)                      4
268,435,456 (2^28)                    5
34,359,738,368 (2^35)                 6
4,398,046,511,104 (2^42)              7
562,949,953,421,312 (2^49)            8
72,057,594,037,927,936 (2^56)         9
9,223,372,036,854,775,808 (2^63)      10

An astute reader might have noticed that having a varint is often beneficial, but in some cases, we might encode our values into more bytes than needed. For example, if we encode 72,057,594,037,927,936 into an int64 type, it will be serialized into 9 bytes, while with a fixed64 type, it will be encoded into 8. Furthermore, a problem coming from the encoding that we just saw is that negative numbers will be encoded as large positive numbers and thus will always take 10 bytes. That begs the following question: How can we efficiently choose between the different integer types?

How to choose?

The answer is, as always, it depends. However, we can be systematic in our choices to avoid many errors. We mostly have three choices that we need to make depending on the data we want to serialize:

  • The range of numbers needed
  • The need for negative numbers
  • The data distribution

The range

By now, you might have noticed that the 32 and 64 suffixes on our types are not always about the number of bits into which our data will be serialized. For varints, this is more about the range of numbers that can be serialized. These ranges are dependent on the algorithm used for serialization.

For signed fixed-size integers (sfixed), signed integers (sint), and plain variable-length integers (int), the range of numbers is the same as the one developers are used to with 32-bit and 64-bit signed integers. This means that we get the following:

[-2^(NUMBER_OF_BITS – 1), 2^(NUMBER_OF_BITS – 1) – 1]

Here, NUMBER_OF_BITS is either 32 or 64 depending on the type you want to use.

For unsigned numbers (uint32 and uint64, and likewise fixed32 and fixed64), this is again what developers expect; we will get the following range:

[0, 2 * 2^(NUMBER_OF_BITS – 1) - 1]

The need for negative numbers

In the case where you simply do not need negative numbers (for example, for IDs), the ideal type to use is an unsigned integer (uint32, uint64). This will prevent you from encoding negative numbers, it will have twice the range in positive numbers compared to signed integers, and it will serialize using the varint algorithm.

Another type that you will potentially work with is the one for signed integers (sint32, sint64). We won’t go into detail about how they are serialized, but the algorithm transforms any negative number into a positive number (ZigZag encoding) and serializes that positive number with the varint algorithm. This is more efficient for serializing negative numbers because instead of being serialized as a large positive number (10 bytes), we take advantage of the varint encoding. However, this is less efficient for serializing positive numbers because we now interleave the previously negative numbers with the positive numbers. This means that for the same positive value, we might need more encoding bytes than with the int types.

The data distribution

Finally, one thing that is worth mentioning is that encoding efficiency is highly dependent on your data distribution. You might have chosen some types depending on some assumptions, but your actual data might be different. Two common examples are choosing an int32 or int64 type because we expect to have few negative values and choosing an int64 type because we expect to have few very big numbers. Both situations might result in significant inefficiencies because, in both cases, we might get a lot of values serialized into 9 or 10 bytes.

Unfortunately, there is no way of deciding the type that will always perfectly fit the data. In this kind of situation, there is nothing better than doing experiments on real data that is representative of your whole dataset. This will give you an idea of what you are doing correctly and what you are doing wrong.

Length-delimited types

Now that we’ve seen all the types for numbers, we are left with the length-delimited types. These are types, such as string and bytes, whose length we cannot know at compile time. Think of them as dynamic arrays.

To serialize such a dynamic structure, we simply prefix the raw data with the length of that data that is following. This means that if we have a string of length 10 and content “0123456789”, we will have the following sequence of bytes:

$ cat length-delimited.txt | protoc --encode=StringValue wrappers.proto | hexdump -C
00000000  0a 0a 30 31 32 33 34 35  36 37 38 39              |..0123456789|

Here, the first 0a instance is the field tag + type, the second 0a instance is the hexadecimal representation of 10, and then we have the ASCII values for each character. To see why 0 turns into 30, you can check the ASCII manual by typing man ascii in your terminal and looking for the hexadecimal set. You should have a similar output to the following:

30  0    31  1    32  2    33  3    34  4
35  5    36  6    37  7    38  8    39  9

Here, the first number of each pair is the hexadecimal value for the second one.
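The same bytes can be built by hand in Go. Here is a sketch; it assumes the payload is shorter than 128 bytes, so the length fits in a single varint byte:

```go
package main

import "fmt"

func main() {
	s := "0123456789"

	// Key byte: field tag 1 shifted left by 3, OR'd with wire
	// type 2 (length-delimited); then the payload length; then
	// the raw ASCII bytes of the string.
	out := []byte{1<<3 | 2, byte(len(s))}
	out = append(out, s...)

	fmt.Printf("% x\n", out) // 0a 0a 30 31 32 33 34 35 36 37 38 39
}
```

Both 0a bytes fall out naturally: the key (1<<3 | 2) and the length (10) happen to have the same value here.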

Another kind of message field that will be serialized into a length-delimited type is a repeated field. A repeated field is the equivalent of a list. To write such a field, we simply add the repeated keyword before the field type. If we wanted to serialize a list of IDs, we could write the following:

repeated uint64 ids = 1;

And with this, we could store 0 or more IDs.

Similarly, these fields will be serialized with the length as a prefix. If we take the ids field and serialize the numbers from 1 to 9, we will have the following:

$ cat repeated.txt | protoc --encode=RepeatedUInt64Values wrappers.proto | hexdump -C
00000000  0a 09 01 02 03 04 05 06  07 08 09                 |...........|

Here, 0a is the field tag + type, 09 is the length of the list payload (9 bytes), and then we have the values 1 through 9.

Important note

Repeated fields are only serialized as length-delimited types when they are storing scalar types except for strings and bytes. These repeated fields are considered packed. For complex types or user-defined types (messages), the values will be encoded in a less optimal way. Each value will be encoded separately and prefixed by the type + tag byte(s) instead of having the type + tag serialized only once.

Field tags and wire types

Up until now, you have read “tag + type” multiple times, and we have not yet seen what it means. As mentioned, the first byte(s) of every serialized field is a combination of the field type and the field tag. Let us start by seeing what a field tag is. You surely noticed something different about the syntax of a field: each time we define a field, we add an equals sign followed by a number. Here’s an example:

uint64 id = 1;

While this looks like assigning a value to the field, it is only there to give the field a unique identifier. These identifiers, called tags, might look insignificant, but they are the most important piece of information for serialization. They are used to tell Protobuf into which field to deserialize which data. As we saw earlier during the presentation of the different serialization algorithms, the field name is not serialized—only the type and the tag are. And thus, when deserialization kicks in, it will see a number and know where to route the data that follows.

Now that we know that these tags are simply identifiers, let us see how they are encoded. A tag is serialized as a varint, combined with a wire type. A wire type is a number given to a group of types in Protobuf. Here is the list of wire types:



Wire type   Meaning            Used for

0           Varint             int32, int64, uint32, uint64, sint32, sint64, bool, enum

1           64-bit             fixed64, sfixed64, double

2           Length-delimited   string, bytes, packed repeated fields

5           32-bit             fixed32, sfixed32, float

Here, 0 is the type for varints, 1 is for 64-bit, and so on.

To combine the tag and the wire type, Protobuf uses a concept called bit packing. This is a technique that is designed to reduce the number of bits into which the data will be serialized. In our case here, the data is the field metadata (the famous tag + type). So, here is how it works. The last 3 bits of the serialized metadata are reserved for the wire type, and the rest is for the tag. If we take the first example that we mentioned in the Fixed-size numbers section, where we serialized 42 in a fixed32 field with tag 1, we had the following:

0d 2a 00 00 00

This time, we are only interested in the 0d part. This is the metadata of the field. To see how this was serialized, let us turn 0d into binary (with 0 padding):

00001 101

Here, we have 101 (5) for the wire type, which is the wire type for 32-bit values, and we have 00001 (1) for tag 1. Now, since the tag is serialized as a varint, the metadata can take more than 1 byte. Here’s a reference for knowing the thresholds at which the number of bytes will increase:

Field tag                    Size (in bits)

1 to 15                      8

16 to 2,047                  16

2,048 to 262,143             24

262,144 to 33,554,431        32

33,554,432 to 536,870,911    40
This means that, since fields without values set will not be serialized, we should reserve the lowest tags for the fields that are most often populated. This lowers the overhead needed to store the metadata. In general, tags 1 to 15 are enough, but if you come up with a situation where you need more, you might consider moving a group of fields into a new message with lower tags.
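Both the bit packing described above and the byte thresholds can be checked with a few lines of Go. This is a sketch of the encoding rule itself (the `fieldKey` and `keySize` helpers are hypothetical), not code from the Protobuf library:

```go
package main

import "fmt"

// fieldKey packs a field tag and a wire type into the metadata
// value that prefixes every serialized field: the last 3 bits hold
// the wire type, and the remaining bits hold the tag.
func fieldKey(tag, wireType uint32) uint32 {
	return tag<<3 | wireType
}

// keySize returns how many bytes the metadata takes once varint
// encoded: each varint byte carries 7 bits of payload.
func keySize(tag uint32) int {
	v := fieldKey(tag, 0) // the wire type never changes the size
	size := 1
	for v >= 0x80 {
		v >>= 7
		size++
	}
	return size
}

func main() {
	fmt.Printf("%02x\n", fieldKey(1, 5)) // fixed32 field, tag 1 -> 0d
	fmt.Println(keySize(15))             // highest 1-byte tag -> 1
	fmt.Println(keySize(16))             // first 2-byte tag  -> 2
	fmt.Println(keySize(2048))           // first 3-byte tag  -> 3
}
```

Running this confirms that tag 15 is the last tag to fit in a single metadata byte, which is why keeping tags low matters.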

Common types

By now, if you have checked the companion code, you will have seen that we are defining a lot of “boring” types that are just wrappers around one field. It is important to note that we wrote them by hand simply to give an example of how you would inspect the serialization of certain data. Most of the time, you will be able to use already-defined types that do the same.

Well-known types

Protobuf itself comes with a bunch of already defined types. We call them well-known types. While a lot of them are rarely useful outside of the Protobuf library itself or advanced use cases, some of them are important, and we are going to use some of them in this book.

The ones that we can understand quite easily are the wrappers. We wrote some by hand earlier. They usually start with the name of the type they are wrapping and finish with Value. Here is a list of wrappers:

  • BoolValue
  • BytesValue
  • DoubleValue
  • FloatValue
  • Int32Value
  • Int64Value
  • StringValue
  • UInt32Value
  • UInt64Value

These types might be interesting for debugging use cases such as the ones we saw earlier or just to serialize simple data such as a number, a string, and so on.
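As a quick sketch of how a wrapper is used (the import path below is the one shipped with protoc’s includes; the Account message itself is hypothetical):

```proto
syntax = "proto3";

import "google/protobuf/wrappers.proto";

message Account {
  uint64 id = 1;
  // A wrapper lets us distinguish "no email set" from "" (empty).
  google.protobuf.StringValue email = 2;
}
```

Because wrappers are messages, their fields have explicit presence, which is what makes them useful for optional scalars.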

Then, there are types representing time, such as Duration and Timestamp. These two types are defined in the exact same way ([Duration | Timestamp] is not valid Protobuf syntax; it means that we could replace it with either of the two names):

message [Duration | Timestamp] {
  // Represents seconds of UTC time since Unix epoch
  // 1970-01-01T00:00:00Z. Must be from
  // 0001-01-01T00:00:00Z to
  // 9999-12-31T23:59:59Z inclusive.
  int64 seconds = 1;

  // Non-negative fractions of a second at nanosecond
  // resolution. Negative second values with fractions
  // must still have non-negative nanos values that count
  // forward in time. Must be from 0 to 999,999,999
  // inclusive.
  int32 nanos = 2;
}

However, as their names suggest, they represent different concepts. A Duration type is the difference between a start time and an end time, whereas a Timestamp type is a single point in time.
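To see why the shared seconds/nanos shape is convenient, here is a sketch in Go, with plain structs standing in for the generated types and only handling an end time at or after the start:

```go
package main

import "fmt"

// span mirrors the seconds/nanos shape shared by the Duration and
// Timestamp well-known types.
type span struct {
	seconds int64
	nanos   int32
}

// between computes a Duration-like difference end - start, assuming
// end is not before start, borrowing a second when nanos underflow.
func between(start, end span) span {
	secs := end.seconds - start.seconds
	nanos := end.nanos - start.nanos
	if nanos < 0 {
		secs--
		nanos += 1_000_000_000
	}
	return span{secs, nanos}
}

func main() {
	start := span{seconds: 10, nanos: 500_000_000}
	end := span{seconds: 12, nanos: 250_000_000}
	fmt.Println(between(start, end)) // {1 750000000}
}
```

The official Go protobuf runtime provides ready-made conversions for these types, so treat this only as an illustration of the representation.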

Finally, one last important well-known type is FieldMask. This is a type that represents a set of fields that should be included when serializing another type. To understand this one, it might be better to give an example. Let us say that we have an API endpoint returning an account with id, username, and email. If you wanted to only get the account’s email address to prepare a list of people you want to send a promotional email to, you could use a FieldMask type to tell Protobuf to only serialize the email field. This lets us reduce the additional cost of serialization and deserialization because now we only deal with one field instead of three.
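A hypothetical request using a FieldMask could look like the following sketch (field_mask.proto also ships with protoc’s includes):

```proto
syntax = "proto3";

import "google/protobuf/field_mask.proto";

message GetAccountRequest {
  uint64 id = 1;
  // Setting paths to ["email"] asks the server to fill in only
  // the email field of the returned account.
  google.protobuf.FieldMask field_mask = 2;
}
```

The mask itself is just a repeated list of field paths; it is up to the server code to honor it.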

Google common types

On top of well-known types, there are types that are defined by Google. These are defined in the googleapis/api-common-protos GitHub repository under the google/type directory and are easily usable in Golang code. I encourage you to check all the types, but I want to mention some interesting ones:

  • LatLng: A latitude/longitude pair storing the values as doubles
  • Money: An amount of money with its currency as defined by ISO 4217
  • Date: Year, Month, and Day stored as int32

Once again, go to the repository to check all the others. These types are battle-tested and in a lot of cases more optimized than trivial types that we would write. However, be aware that these types might also not be a good fit for your use cases. There is no such thing as a one-size-fits-all solution.


Finally, the last construct that is important to see, and that we are going to work with throughout this book, is the service. In Protobuf, a service is a collection of RPC endpoints, and each endpoint has two major parts: the input of the RPC and its output. So, if we wanted to define a service for our accounts, we could have something like the following:

message GetAccountRequest {…}
message GetAccountResponse {…}
service AccountService {
  rpc GetAccount(GetAccountRequest) returns (GetAccountResponse);
}

Here, we define a message representing a request and another one representing the response, and we use them as the input and output of our GetAccount RPC. In the next chapter, we are going to cover more advanced usage of services, but right now, what is important to understand is that Protobuf defines the services but does not generate the code for them. Only gRPC will.

Protobuf’s services are there to describe a contract, and it is the job of an RPC framework to fulfill that contract on the client and server sides. Notice that I wrote an RPC framework and not simply gRPC. Any RPC framework could read the information provided by Protobuf’s services and generate code out of it. The goal of Protobuf here is to be independent of any language and framework. What the application does with the serialized data is not important to Protobuf.

Finally, these services are the pillars of gRPC. As we are going to see later in this book, we will use them to make requests, and we are going to implement them on the server side to return responses. Using the defined services on the client side will let us feel like we are directly calling a function on the server. If we talk about AccountService, for example, we can make a call to GetAccount by having the following code:

res := client.GetAccount(req)

Here, client is an instance of a gRPC client, req is an instance of GetAccountRequest, and res is an instance of GetAccountResponse. It feels a little bit like we are directly calling GetAccount, which is implemented on the server side. However, this is gRPC’s doing: it hides all the complex ceremony of serializing objects, sending them between the client and server, and deserializing them again.
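The generated code behind that one-liner is, in spirit, a struct whose methods hide the network round trip. Here is a heavily simplified, hypothetical sketch with no real gRPC types involved; the actual generated client also takes a context and returns an error:

```go
package main

import "fmt"

// Hypothetical stand-ins for the protoc-generated request/response types.
type GetAccountRequest struct{ Id uint64 }
type GetAccountResponse struct{ Username string }

// accountClient mimics the shape of a generated gRPC client stub.
type accountClient struct{}

// GetAccount would normally serialize req, send it over HTTP/2, and
// deserialize the response; here it just fakes a server round trip.
func (c *accountClient) GetAccount(req *GetAccountRequest) *GetAccountResponse {
	return &GetAccountResponse{Username: fmt.Sprintf("user-%d", req.Id)}
}

func main() {
	client := &accountClient{}
	res := client.GetAccount(&GetAccountRequest{Id: 42})
	fmt.Println(res.Username) // user-42
}
```

From the caller's point of view, nothing distinguishes this from a local function call, which is exactly the illusion gRPC aims for.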


Summary

In this chapter, we saw how to write messages and services, and we saw how scalar types are serialized and deserialized. This prepares us for the rest of the book, where we will use this knowledge extensively.

In the next chapter, we are going to talk about gRPC, why it uses Protobuf for serialization and deserialization, and what it is doing behind the scenes, and we are going to compare it with REST and GraphQL APIs.


Quiz

  1. What does the number 32 represent in the int32 scalar type?
    1. The number of bits the serialized data will be stored in
    2. The range of numbers that can fit into the scalar type
    3. Whether the type can accept negative numbers or not
  2. What is varint encoding doing?
    1. Compressing data in such a way that a smaller number of bytes will be required for serializing data
    2. Turning every negative number into positive numbers
  3. What is ZigZag encoding doing?
    1. Compressing data in such a way that a smaller number of bytes will be required for serializing data
    2. Turning every negative number into a positive number
  4. In the following code, what is the = 1 syntax and what is it used for?
    uint64 ids = 1;
    1. This is assigning the value 1 to a field
    2. 1 is an identifier that has no other purpose than helping developers
    3. 1 is an identifier that is helping the compiler know into which field to deserialize the binary data.
  5. What is a message?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states
  6. What is an enum?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states
  7. What is a service?
    1. An object that contains fields and represents an entity
    2. A collection of API endpoints
    3. A list of possible states


Answers

  1. B
  2. A
  3. B
  4. C
  5. A
  6. C
  7. B

Key benefits

  • Discover essential guidelines to steer clear of pitfalls when designing and evolving your gRPC services
  • Develop your understanding of advanced gRPC concepts such as authentication and security
  • Put your knowledge into action as you build, test, and deploy a TODO list microservice


In recent years, the popularity of microservice architecture has surged, bringing forth a new set of requirements. Among these, efficient communication between the different services takes center stage, and that's where gRPC shines. This book will take you through creating gRPC servers and clients in an efficient, secure, and scalable way. However, communication is just one aspect of microservices, so this book goes beyond that to show you how to deploy your application on Kubernetes and configure other tools that are needed for making your application more resilient. With these tools at your disposal, you’ll be ready to get started with using gRPC in a microservice architecture. In gRPC Go for Professionals, you'll explore core concepts such as message transmission and the role of Protobuf in serialization and deserialization. Through a step-by-step implementation of a TODO list API, you’ll see the different features of gRPC in action. You’ll then learn different approaches for testing your services and debugging your API endpoints. Finally, you’ll get to grips with deploying the application services via Docker images and Kubernetes.

What you will learn

  • Understand the different API endpoints that gRPC lets you write
  • Discover the essential considerations when writing your Protobuf files
  • Compile Protobuf code with protoc and Bazel for efficient development
  • Gain insights into how advanced gRPC concepts work
  • Grasp techniques for unit testing and load testing your API
  • Get to grips with deploying your microservices with Docker and Kubernetes
  • Discover tools for writing secure and efficient gRPC code

Product Details

Publication date : Jul 14, 2023
Length : 260 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781837638840



Table of Contents

Preface
Chapter 1: Networking Primer
Chapter 2: Protobuf Primer
Chapter 3: Introduction to gRPC
Chapter 4: Setting Up a Project
Chapter 5: Types of gRPC Endpoints
Chapter 6: Designing Effective APIs
Chapter 7: Out-of-the-Box Features
Chapter 8: More Essential Features
Chapter 9: Production-Grade APIs
Epilogue
Index
Other Books You May Enjoy


