
Mastering Service Mesh: Enhance, secure, and observe cloud-native applications with Istio, Linkerd, and Consul

By Anjali Khatri and Vikram Khatri

What do you get with Print?

  • Instant access to your digital eBook copy while your print order is shipped
  • Black & white paperback book shipped to your address
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM-free: read whenever, wherever, and however you want

Product Details

Publication date: Mar 30, 2020
Length: 626 pages
Edition: 1st
Language: English
ISBN-13: 9781789615791

Mastering Service Mesh

Monolithic Versus Microservices

The purpose of this book is to walk you through the service mesh architecture. We will cover the three main open source service mesh providers: Istio, Linkerd, and Consul. First, we will look at how the evolution of technology led to the service mesh. In this chapter, we cover the application development journey from monolithic to microservices.

The technology landscape that fueled the growth of the monolithic framework was built on a technology stack that became available more than 20 years ago. As hardware and software virtualization improved significantly, a new wave of innovation began with the adoption of microservices by Netflix, Amazon, and other companies around 2011. This trend started with the redesign of monolithic applications as small, independent microservices.

Before we compare monolithic and microservices architectures, let's take a step back and review what led to where we are today. This chapter briefly traces the evolution of early computer machines, hardware virtualization, and software virtualization, and then the transition from monolithic to microservices-based applications, summarizing the journey from the early days to the present.

In this chapter, we will cover the following topics:

  • Early computer machines
  • Monolithic applications
  • Microservices applications

Early computer machines

IBM launched its first commercial computer (https://ibm.biz/Bd294n), the IBM 701, in 1953; it was the most powerful high-speed electronic calculator of its time. Further technological progress produced mainframes, a revolution that began in the mid-1950s (https://ibm.biz/Bd294p).

Even before co-founding Intel with Robert Noyce in 1968, Gordon Moore had formulated Moore's Law (https://intel.ly/2IY5qLU) in 1965, which states that the number of transistors on a chip doubles approximately every 24 months. This exponential growth continues to this day, though the trend may not last much longer.

IBM created its first official VM product, VM/370, in 1972 (http://www.vm.ibm.com/history); hardware virtualization on the Intel/AMD platform followed in 2005 and 2006. On early computing machines, monolithic applications were the only choice.

Early machines ran only one operating system. As time passed and machines grew in size, the need to run multiple operating systems on the same machine, by slicing it into smaller virtual machines, led to hardware virtualization.

Hardware virtualization

Hardware virtualization led to the proliferation of virtual machines in data centers. Greg Kalinsky, EVP and CIO of GEICO, mentioned the use of 70,000 virtual machines in his keynote address at the IBM Think 2019 conference. Managing virtual machines required a different set of tools. In this area, VMware was very successful in the Intel market, whereas on IBM POWER, the Hardware Management Console (HMC) was widely used to create Logical Partitions (LPARs) through PowerVM. Hardware virtualization has its own overheads, but it has been very popular for running multiple operating systems on the same physical machine.

Different monolithic applications had different operating system and language runtime requirements, and it became possible to run them on the same hardware by using multiple virtual machines. During this period of hardware virtualization, work on enterprise applications using Service-Oriented Architecture (SOA) and the Enterprise Service Bus (ESB) started to evolve, which led to large monolithic applications.

Software virtualization

The next wave of innovation came with software virtualization through containerization technology. Though not new, software virtualization started to gain serious traction once tools made it easy to adopt. Docker was an early pioneer in this space, making software virtualization accessible to general IT professionals.

Solomon Hykes started dotCloud in 2010 and renamed it Docker in 2013. Software virtualization became possible due to advances in technology that provide namespace, filesystem, and process isolation while still using the same kernel, whether running on bare metal or in a virtual machine.
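
To make these isolation primitives concrete, here is a minimal sketch in Go of the kernel mechanism that containers build on (assuming Linux and root privileges; the /bin/sh payload is just an illustration, and this is not how Docker itself is implemented). It clones a child process into new UTS, PID, and mount namespaces, giving it its own hostname, process tree, and mount table while it still shares the host kernel:

    // namespaces.go: run a shell inside new UTS, PID, and mount namespaces.
    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        cmd := exec.Command("/bin/sh") // illustrative payload; any binary works
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            // Each flag requests a new namespace for the child process.
            Cloneflags: syscall.CLONE_NEWUTS | // isolated hostname
                syscall.CLONE_NEWPID | // isolated process tree
                syscall.CLONE_NEWNS, // isolated mount table
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

A container runtime such as Docker layers image management, cgroup resource limits, and network namespaces on top of these same primitives.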

Software virtualization using containers provides better resource utilization than running multiple virtual machines, with effective utilization improving by an estimated 30% to 40%. A virtual machine usually takes seconds to minutes to initialize, whereas a container shares the already-running kernel, so its startup time is much quicker.

In fact, Google used software virtualization at a very large scale, running containers for close to 10 years in its internal project known as Borg. When Google published a research paper at the EuroSys conference in 2015 (https://goo.gl/Ez99hu) about its container-based approach to managing data centers, it piqued the interest of many technologists. At the very same time, Docker exploded in popularity during 2014 and 2015, making software virtualization simple enough to use.

One of the main benefits of software virtualization (also known as containerization) is that it eliminates dependency problems for a particular piece of software. For example, glibc is the main building-block library on Linux, and hundreds of libraries depend on a particular version of it. We can build a Docker container with a specific version of glibc, and it will run on a machine that has a later version installed. Normally, maintaining two software stacks built against different versions of glibc is very complex, but containers make it simple. Docker is credited with creating a simple user interface that made software packaging easy and accessible to developers.

Software virtualization made it possible to run different monolithic applications on the same hardware (bare metal) or within the same virtual machine. It also led to smaller services (each a complete business function) being packaged as independent software units. This is when the era of microservices started.

Container orchestration

It is easy to manage a few containers and their deployment, but as the number of containers grows, a container orchestration platform makes deployment and management simpler through declarative prescriptions. As containerization proliferated in 2015, orchestration platforms evolved alongside it. Docker shipped its own open source container orchestration platform, Docker Swarm, a clustering and scheduling tool for Docker containers.

Apache Mesos, though not exactly similar to Docker Swarm, was built on the same principles as the Linux kernel, but as an abstraction layer between applications and the underlying machines. It was meant for distributed computing and acts as a cluster manager with an API for resource management and scheduling.

Kubernetes is the open source evolution of Google's Borg project; its first version was released in 2015, when it joined the Cloud Native Computing Foundation (https://cncf.io) as its first incubated project.

Major companies such as Google, Red Hat, Huawei, ZTE, VMware, Cisco, Docker, AWS, IBM, and Microsoft contribute to the Kubernetes open source platform, which has become a modern cluster manager and container orchestration platform. It is no surprise that Kubernetes has become the de facto platform: it is used by all major cloud providers, with 125 companies and more than 2,800 contributors working on it (https://www.stackalytics.com/cncf?module=kubernetes).

As container orchestration began to simplify cluster management, it became easy to run microservices in a distributed environment, making microservices-based applications loosely coupled systems with horizontal scale-out possibilities.

Horizontal scale-out distributed computing is not new; IBM's shared-nothing architecture for the Db2 database (a monolithic application) has been in use since 1998. What is new is loosely coupled microservices that can run and scale out easily using a modern cluster manager.

Monolithic applications using a three-tier architecture, such as Model-View-Controller (MVC), or SOA were typical architectural patterns on bare-metal or virtualized machines. These patterns worked well in static data center environments, where machines were identified by IP address and changes were managed through DNS. That began to change with distributed applications that could run on any machine (meaning the IP address could change in the case of failures). The shift moved gradually from a static data center approach to a dynamic one, in which a workload is identified by the name of its microservice rather than by the IP address of the machine or pod where it runs, as illustrated in the sketch that follows.
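
As a small illustration of name-based identification, the following Go sketch (the service name, port, and path are hypothetical) shows a consumer reaching a peer service by its stable DNS name; the platform's service discovery resolves that name to whichever instance currently runs the workload:

    // consumer.go: call a peer service by name, never by IP address.
    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // "inventory" is a stable service name; cluster DNS maps it to a
        // healthy instance, so this consumer never tracks IP addresses.
        resp, err := http.Get("http://inventory:8080/stock/widget-1")
        if err != nil {
            panic(err) // real code would add timeouts and retries (or delegate them to a service mesh)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }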

This fundamental shift from static to dynamic infrastructure is the basis of the evolution from a monolithic to a microservices architecture. Monolithic applications are tightly coupled, with a single code base released in one piece for the entire application stack; changing a single component without affecting the others is very difficult, but the model is simple. Microservices applications, on the other hand, are loosely coupled, with multiple code bases that can be released independently of each other; changing a single component is easy, but the overall system loses the simplicity of the monolith.

We will cover a brief history of monolithic and microservices applications in the next sections to establish context. This will help us transition to the specific goals of this book.

Monolithic applications

The application evolution journey from monolithic to microservices can be seen in the following diagram:

Monolithic applications started as small applications and were then built up into a tiered architecture that separated the frontend from the backend, and the backend from the data sources. In this architecture, the frontend manages user interaction, the middle tier manages the business logic, and the backend manages data access. This can be seen in the following diagram:

In the preceding diagram, the middle tier, also known as the business logic, is tightly bound to the frontend and the backend. This is a one-dimensional monolithic experience where all the tiers are in one straight line.

The three-tier modular client-server architecture, consisting of a frontend tier, an application tier, and a database tier, is now more than 20 years old. It served its purpose of letting people build complex enterprise applications, albeit with known limitations regarding complexity, software upgrades, and zero downtime.

A large development team commits its code to a source code repository such as GitHub. Before CI/CD pipelines existed, the deployment process from code commit to production was manual. Releases had to be tested manually, although there were some automated test cases, and organizations would declare a code freeze while moving code into production. Applications became overly large, complex, and very difficult to maintain in the long term; once the original developers were no longer available, adding enhancements became very difficult and time-consuming.

To overcome these limitations, the concept of SOA started to evolve from around 2002 onward, and the Enterprise Service Bus (ESB) evolved to establish communication links between the different applications in an SOA.

Brief history of SOA and ESB

The one-dimensional model of the three-tier architecture was split into a multi-dimensional SOA, where inter-service communication was enabled through ESB using the Simple Object Access Protocol (SOAP) and other web services standards.

SOA, along with ESB, could be used to break down a large three-tier application into services, where applications were built using these reusable services. The services could be dynamically discovered using service metadata through a metadata repository. With SOA, each functionality is built as a coarse-grained service that's often deployed inside an application server.

Multiple services need to be integrated to create composite services that are exposed through the ESB layer, which becomes a centralized bus for communication. This can be seen in the following diagram:

The preceding diagram shows the consumer and provider model connected through the ESB. The ESB also contains significant business logic, making it a monolithic entity whose runtime is shared by all developers for developing and deploying their service integrations.

In the next section, we'll talk about API gateways. The concept of the API gateway evolved around 2008 with the advent of smartphones, which brought rich client applications needing easy and secure connectivity to backend services.

API Gateway

SOA/web services were not ideal for exposing business functionality as APIs. This was due to the complex nature of web service technologies, in which SOAP serves as the message format for service-to-service communication, for securing web services, and for defining service discovery metadata. SOAP lacked a self-service model, which hindered the development of an ecosystem around it.

We use the term application programming interface (API) for a service exposed over REST (HTTP/JSON) or as a web service (SOAP/HTTP). An API gateway was typically built on top of existing SOA/ESB implementations so that business functionality could be exposed securely as a managed service. This can be seen in the following diagram:

In the preceding diagram, the API gateway is used to expose the three-tier and SOA/ESB-based services, but the business logic contained in the ESB still hinders the development of independent services.

With the availability of containerization, the new microservices paradigm started to evolve out of the SOA/ESB architecture in 2012 and seriously took off in 2015.

Drawbacks of monolithic applications

Monolithic applications are simple to develop, deploy, and scale as long as they remain small.

As the size and complexity of monoliths grow, various disadvantages arise, such as the following:

  • Development is slow.
  • Large monolithic code bases intimidate new developers.
  • The application is difficult to understand and modify.
  • Software releases are painful and occur infrequently.
  • The IDE and web container become overloaded and slow.
  • Continuous deployment is difficult; a code freeze period is needed for each deployment.
  • Scaling the application can be difficult as data volumes increase.
  • Scaling development across teams can be difficult.
  • A long-term commitment to a single technology stack is required.
  • Reliability suffers because the application is difficult to test thoroughly.

Enterprise application development is coordinated among many smaller teams that can work independently of each other. As an application grows in size, the aforementioned complexities lead these teams to look for better approaches, resulting in the adoption of microservices.

Microservices applications

A small number of developers recognized the need for new thinking very early on and, by early 2014, had started working on the evolution of a new architecture called microservices.

Early pioneers

A few individuals took a leap forward in moving their respective companies away from monoliths and toward small, manageable services. Most notable among them is Jeff Bezos, Amazon's CEO, who famously issued a mandate at Amazon (https://bit.ly/2Hb3NI5) in 2002 stating that all teams had to adopt a service interface methodology, with all communication happening over the network. This daring initiative replaced the monolith with a collection of loosely coupled services. One nugget of wisdom from Jeff Bezos was the two-pizza team: no individual team should be larger than two pizzas can feed. This colloquial wisdom is at the heart of shorter development cycles, increased deployment frequency, and faster time to market.

Netflix adopted microservices early on. It's important to mention Netflix's Open Source Software Center (OSS) contributions at https://netflix.github.io. Netflix also created a suite of automated open source tools, the Simian Army (https://github.com/Netflix/SimianArmy), to stress-test its massive cloud infrastructure. The rate at which Netflix has adopted and implemented new technologies is phenomenal.

Lyft adopted microservices and created an open source distributed proxy known as Envoy (https://www.envoyproxy.io/) for services and applications; Envoy would later become a core part of some of the most popular service mesh implementations, including Istio and Consul.

Though this book is not about developing microservices applications, we will briefly discuss the microservices architecture insofar as it is relevant from the perspective of a service mesh.

Since the early 2000s, when machines were still bare metal, three-tier monolithic applications have run on more than one machine, leading to a form of distributed computing that was very tightly coupled. Bare metal evolved into VMs, and monolithic applications into SOA/ESB with an API gateway. This trend continued until around 2015, when the advent of containers disrupted the SOA/ESB way of thinking in favor of self-contained, independently managed services. It was in this context that the term microservice was coined.

The term microservice was first used at a workshop of software architects in 2011 (https://bit.ly/1KljYiZ), where it described a common architectural style as a fine-grained SOA.

Chris Richardson created https://microservices.io in January 2014 to document architecture and design patterns.

James Lewis and Martin Fowler published their blog post about microservices (https://martinfowler.com/articles/microservices.html) in March 2014, and it popularized the term.

The microservices boom was fueled by easy containerization, made possible by Docker, and by Kubernetes, the de facto container orchestration platform created for distributed computing.

What is a microservice?

The natural transition from SOA/ESB is toward microservices, in which services are decoupled from a monolithic ESB. Let's go over the core characteristics of microservices:

  • Each service is autonomous; it is developed and deployed independently.
  • Each microservice can be scaled independently of the others when it receives more traffic, without having to scale the other microservices.
  • Each microservice is designed around a business capability, so each service serves a specific business goal on the simple principle that it does only one thing, and does it well.
  • Since services do not share the same execution runtime, each microservice can be developed in a different language (a polyglot approach), giving developers the agility to pick the best programming language for their own service.
  • The microservices architecture eliminates the need for a centralized ESB. Business logic, including inter-service communication, is handled through smart endpoints and dumb pipes: the ESB's centralized business logic is distributed among the microservices (the smart endpoints), and a primitive messaging system (a dumb pipe) carries service-to-service communication over a lightweight protocol such as REST or gRPC, as the sketch below illustrates.
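
The following minimal sketch shows what a smart endpoint might look like in Go (the service, route, and payload names are hypothetical): the business logic lives inside the service itself, and plain HTTP/JSON serves as the dumb pipe between services:

    // inventory.go: a self-contained microservice with a smart endpoint.
    package main

    import (
        "encoding/json"
        "net/http"
    )

    // Stock is the JSON payload this service owns and serves.
    type Stock struct {
        SKU   string `json:"sku"`
        Count int    `json:"count"`
    }

    func main() {
        // The endpoint owns its business logic; no central ESB mediates calls.
        http.HandleFunc("/stock/widget-1", func(w http.ResponseWriter, r *http.Request) {
            json.NewEncoder(w).Encode(Stock{SKU: "widget-1", Count: 42})
        })
        // The dumb pipe is plain HTTP on a well-known port.
        http.ListenAndServe(":8080", nil)
    }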

The evolution from SOA/ESB to the microservices pattern was mainly driven by the need to adapt to smaller teams that are independent of each other and to provide a self-service model for consuming the services those teams create. At the time of writing, microservices is a winning pattern that many enterprises are adopting to modernize their existing monolithic application stacks.

Evolution of microservices

The following diagram shows the evolution of the application architecture from a three-tier architecture to SOA/ESB and then to microservices in terms of flexibility toward scalability and decoupling:

Microservices evolved from the tiered and SOA architectures and are becoming the accepted pattern for building modern applications for the following reasons:

  • Extreme scalability
  • Extreme decoupling
  • Extreme agility

These are key points regarding the design of a distributed scalable application where developers can pick the best programming language of their choice to develop their own service.

A major differentiator between monolithic and microservices architectures is that, with microservices, the services are loosely coupled and communicate over dumb pipes, typically low-level REST or gRPC protocols. One way to achieve loose coupling is to use a separate data store for each service. This isolates services from one another, since a particular service is never blocked because another service holds a data lock, and it allows each microservice to scale up and down, along with its data store, independently of all the other services.

We pointed out the early pioneers of microservices previously; next, we will look at the microservices architecture itself.

Microservices architecture

The aim of a microservices architecture is to completely decouple application components from one another so that they can be maintained, scaled, and evolved independently. It is an evolution of the application architecture, SOA, and API publishing:

  • SOA: Focuses on reuse, technical integration issues, and technical APIs
  • Microservices: Focus on functional decomposition, business capabilities, and business APIs

In his article, Martin Fowler states that the microservices architecture would have been better named the micro-component architecture because it is really about breaking applications up into smaller pieces (micro-components). For more information, see Microservices, by Martin Fowler, at https://martinfowler.com/articles/microservices.html. Also, check out Kim Clark's IBM blog post at https://developer.ibm.com/integration/blog/2017/02/09/microservices-vs-soa, where he argues that microservices are really micro-components.

The following diagram shows the microservices architecture, in which different clients consume the same services. Each service can use the same or a different language and can be deployed and scaled independently of the others:

Each microservice runs in its own process. Services are optimized for a single function, and each must have one, and only one, reason to change. Communication between services happens through REST APIs and message brokers. CI/CD is defined per service, services evolve at different paces, and the scaling policy for each service can be different.

Benefits and drawbacks of microservices

The explosion of microservices is no accident; it is mainly due to rapid development and scalability:

  • Rapid development: Develop and deploy a single service independently. Focus only on the interface and the functionality of the service and not the functionality of the entire system.
  • Scalability: Scale a service independently without affecting others. This is simple and easy to do in a Kubernetes environment.

The other benefits of microservices are as follows:

  • Each service can use a different language (better polyglot adaptability).
  • Services are developed on their own timetables so that the new versions are delivered independently of other services.
  • The development of microservices is suited for cross-functional teams.
  • Improved fault isolation.
  • Eliminates any long-term commitment to a technology stack.

However, microservices are not a panacea and come with drawbacks:

  • The complexity of a distributed system.
  • Increased resource consumption.
  • Inter-service communication adds complexity.
  • Testing dependencies in a microservices-based application without a tool can be very cumbersome.
  • When a service fails, it becomes very difficult to identify the cause of a failure.
  • A microservice can't fetch data from other services through simple queries. Instead, it must implement queries using APIs.
  • Microservices lead to more Ops (operations) overheads.

There is no silver bullet, and technology continues to emerge and evolve. Next, we'll discuss the future of microservices.

Future of microservices

Microservices can be deployed in a distributed environment using a container orchestration platform, such as Kubernetes or Docker Swarm, or an on-premises Platform as a Service (PaaS), such as Pivotal Cloud Foundry or Red Hat OpenShift.

A service mesh helps reduce or overcome the aforementioned challenges and Ops overheads, such as the operational burden of manageability, serviceability, metering, and testing. Service mesh providers such as Istio, Linkerd, and Consul make this simple.

As with every technology, there is no perfect solution, and each technology has its own benefits and drawbacks, shaped by an individual's perception of and bias toward it. Sometimes, the drawbacks of a particular technology outweigh the benefits it accrues.

In the last 20 years, we have seen monolithic applications evolve into three-tier applications, then the adoption of the SOA/ESB architecture, and then the transition to microservices. We are already witnessing a framework evolution around microservices using a service mesh, which is what this book is based on.

Summary

In this chapter, we reviewed the evolution of computers and how hardware virtualization made it possible to run multiple virtual machines on a single computer. We learned about the tiered application journey that started more than 20 years ago on bare-metal machines and witnessed the transition of three-tier applications to the SOA/ESB architecture. The evolution of software virtualization drove the explosion of containerization, which in turn drove the evolution of the SOA/ESB architecture toward microservices. Then, we learned about the benefits and drawbacks of microservices. You can apply this knowledge of microservices to drive a business's need for rapid development and scalability to achieve time-to-market goals.

In the next chapter, we will move on to cloud-native applications and look at what is motivating various enterprises to move from monolithic to cloud-native applications. The purpose of this book is to go into the details of the service mesh architecture, and this can't be done without first learning about the cloud-native architecture.

Questions

  1. Microservices applications are difficult to test.

A) True
B) False

  2. Monolithic/microservices applications are related to dynamic infrastructures.

A) True
B) False

  3. Monolithic applications are best if they are small in size.

A) True
B) False

  4. When a microservice fails, debugging becomes very difficult.

A) True
B) False

  5. Large monolithic applications are very difficult to maintain and patch in the long term.

A) True
B) False

Further reading

  • Microservices Patterns, Richardson, C. (2018), Shelter Island, NY: Manning
  • Microservices Resource Guide, Fowler, M. (2019), martinfowler.com, available at https://martinfowler.com/microservices, accessed March 3, 2019
  • Microservices for the Enterprise, Indrasiri, K., and Siriwardena, P. (2018), Apress
  • From Monolithic Three-tiers Architectures to SOA versus Microservices, Maresca, P. (2015), TheTechSolo, available at https://bit.ly/2GYhYk, accessed March 3, 2019
  • Retire the Three-Tier Application Architecture to Move Toward Digital Business, Thomas, A., and Gupta, A. (2016), Gartner.com, available at https://gtnr.it/2Fl787w, accessed March 3, 2019
  • Microservices Lead the New Class of Performance Management Solutions, LightStep (2019), available at https://lightstep.com/blog/microservices-trends-report-2018, accessed March 3, 2019
  • What year did Bezos issue the API Mandate at Amazon?, Schroeder, G. (2016), available at https://bit.ly/2Hb3NI5, accessed March 3, 2019
  • Kubernetes Components, Kubernetes.io (2019), available at https://bit.ly/2JyhIGt, accessed March 3, 2019
  • Microservices implementation Netflix stack, Thennakoon, T. (2017), Medium, available at https://bit.ly/2NCDzPZ, accessed March 3, 2019

Key benefits

  • Manage your cloud-native applications easily using service mesh architecture
  • Learn about Istio, Linkerd, and Consul – the three primary open source service mesh providers
  • Explore tips, techniques, and best practices for building secure, high-performance microservices

Description

Although microservices-based applications support DevOps and continuous delivery, they can also add to the complexity of testing and observability. The implementation of a service mesh architecture, however, allows you to secure, manage, and scale your microservices more efficiently. With the help of practical examples, this book demonstrates how to install, configure, and deploy an efficient service mesh for microservices in a Kubernetes environment. You'll get started with a hands-on introduction to the concepts of cloud-native application management and service mesh architecture, before learning how to build your own Kubernetes environment. While exploring later chapters, you'll get to grips with the three major service mesh providers: Istio, Linkerd, and Consul. You'll be able to identify their specific functionalities, from traffic management, security, and certificate authority through to sidecar injections and observability. By the end of this book, you will have developed the skills you need to effectively manage modern microservices-based applications.

What you will learn

  • Compare the functionalities of Istio, Linkerd, and Consul
  • Become well-versed with service mesh control and data plane concepts
  • Understand service mesh architecture with the help of hands-on examples
  • Work through hands-on exercises in traffic management, security, policy, and observability
  • Set up secure communication for microservices using a service mesh
  • Explore service mesh features such as traffic management, service discovery, and resiliency



Table of Contents

Preface
Section 1: Cloud-Native Application Management
Monolithic Versus Microservices
Cloud-Native Applications
Section 2: Architecture
Service Mesh Architecture
Service Mesh Providers
Service Mesh Interface and SPIFFE
Section 3: Building a Kubernetes Environment
Building Your Own Kubernetes Environment
Section 4: Learning about Istio through Examples
Understanding the Istio Service Mesh
Installing a Demo Application
Installing Istio
Exploring Istio Traffic Management Capabilities
Exploring Istio Security Features
Enabling Istio Policy Controls
Exploring Istio Telemetry Features
Section 5: Learning about Linkerd through Examples
Understanding the Linkerd Service Mesh
Installing Linkerd
Exploring the Reliability Features of Linkerd
Exploring the Security Features of Linkerd
Exploring the Observability Features of Linkerd
Section 6: Learning about Consul through Examples
Understanding the Consul Service Mesh
Installing Consul
Exploring the Service Discovery Features of Consul
Exploring Traffic Management in Consul
Assessment
Other Books You May Enjoy

