
Cloud Native Programming with Golang: Develop microservice-based high performance web apps for the cloud with Go

By Mina Andrawos, Martin Helmich
€28.99 €19.99
Book Dec 2017 404 pages 1st Edition
eBook
€28.99 €19.99
Print
€37.99
Subscription
€14.99 Monthly

What do you get with eBook?

  • Instant access to your digital eBook purchase
  • Download this book in EPUB and PDF formats
  • Access this title in our online reader with advanced features
  • DRM free: read whenever, wherever, and however you want

Product Details


Publication date: Dec 28, 2017
Length: 404 pages
Edition: 1st Edition
Language: English
ISBN-13: 9781787125988
Vendor: Google


Cloud Native Programming with Golang

Chapter 1. Modern Microservice Architectures

In the world of computing and software, we hear about many cool new technologies and frameworks almost every week. Some of them stay and persist, whereas others fail the test of time and disappear. Needless to say, cloud computing sits very comfortably in the former category. We live in a world where cloud computing powers almost everything that needs serious backend computing power, from Internet of Things (IoT) devices that check the temperature of a refrigerator to multiplayer video games that show you real-time stats comparing your scores to those of your peers.

Cloud computing benefits huge enterprises with offices all over the world, as well as two-person start-ups writing code in a coffee shop. There is plenty of material covering why cloud computing is so important for modern information technologies. For the sake of efficiency, we'll provide a straightforward answer to this question, without going into long bullet points, graphs, and lengthy paragraphs. For businesses, it's all about making money and saving costs. Cloud computing drives costs down significantly for most organizations. That's because cloud computing saves you the cost of building your own data center. No expensive hardware needs to be bought, and no expensive buildings with fancy air-conditioning systems need to be commissioned. Additionally, almost all cloud computing offerings give you the ability to pay for only what you use and no more. Cloud computing also offers massive flexibility for software engineers and IT administrators to do their jobs quickly and efficiently, thus achieving developer happiness and increased productivity.

In this chapter, we will cover the following topics:

  • Design goals of cloud-native applications, especially scalability
  • Different cloud service models
  • The twelve-factor app
  • Microservice architectures
  • Communication patterns, especially synchronous versus asynchronous communication

Why Go?


Go (or Golang) is a relatively new programming language that is taking the software development world by storm. It was developed by Google to facilitate the construction of its backend software services. However, it's now being used by numerous enterprises and start-ups to write powerful applications. What sets Go apart is that it was built from the ground up to provide performance that competes with very powerful languages, such as C and C++, while offering a relatively simple syntax that resembles that of dynamic languages such as JavaScript. The Go runtime offers garbage collection, but it does not rely on a virtual machine to achieve that; Go programs are compiled into native machine code. When invoking the Go compiler, you simply choose the platform (Windows, macOS, Linux, and so on) that you'd like the binary to run on, typically by setting the GOOS and GOARCH environment variables at build time. The compiler will then produce a single binary that works on that platform. This makes Go capable of cross-compiling and producing native binaries.

Go is perfect for microservice architectures, which we will be seeing a lot of in the future. A microservice architecture is an architecture where you divide the responsibilities of your application between smaller services that only focus on specific tasks. These services can then communicate among themselves to obtain the information they need to produce results.

Go is a fresh programming language, developed in the age of cloud computing and with modern software technologies in mind. Go is optimized for portable microservice architectures because a Go program mostly compiles to a single binary, making the need for dependencies and virtual machines in production environments almost non-existent. Go also plays a major role in container technologies: Docker, the top name in software containers, is written in none other than Go. Due to Go's popularity, major cloud providers, as well as third-party contributors, are working to ensure that Go gets the API support it needs for the different cloud platforms.

The goal of this book is to build the knowledge bridge between the Go programming language and the cloud technologies of modern computing. In this book, you will gain practical knowledge of Go microservice architectures, message queues, containers, cloud platform Go APIs, SaaS application design, monitoring cloud applications, and more.

Basic design goals


In order to fully benefit from the advantages of modern cloud platforms, we need to consider their characteristic properties when developing applications that should run on these platforms.

One of the main design goals of cloud applications is scalability. On the one hand, this means growing your application's resources as needed in order to efficiently serve all your users. On the other hand, it also means shrinking your resources back to an appropriate level when you do not need them anymore. This allows you to run your application in a cost-efficient manner without having to constantly overprovision for peak workloads.

In order to achieve this, typical cloud deployments often use small virtual machine instances that host an application and scale by adding (or removing) more of these instances. This method of scaling is called horizontal scaling, or scale out; vertical scaling, or scale up, by contrast, does not increase the number of instances but provisions more resources for your existing instances. Horizontal scaling is often preferred to vertical scaling for several reasons. First, horizontal scaling promises unlimited linear scalability, whereas vertical scaling has its limits because the amount of resources that you can add to an existing server cannot grow infinitely. Second, horizontal scaling is often more cost-efficient since you can use cheap commodity hardware (or, in cloud environments, smaller instance types), whereas larger servers often grow exponentially more expensive.

Horizontal scaling versus vertical scaling; the former works by adding more instances and load-balancing the workload across them, whereas the latter works by adding more resources to existing instances

All major cloud providers offer the ability to perform horizontal scaling automatically, depending on your application's current resource utilization. This feature is called auto-scaling. Unfortunately, you do not get horizontal scalability for free. In order to be able to scale out, your application needs to adhere to some very important design goals that often need to be considered from the start, as follows:

  • Statelessness: Each instance of a cloud application should not have any kind of internal state (meaning any kind of data that is saved for later use, whether in memory or on the filesystem). In a scale-out scenario, subsequent requests might be served by another instance of the application and, for this reason, must not rely on any kind of state being present from previous requests. In order to achieve this, it is usually necessary to externalize any kind of persistent storage, such as databases and filesystems. Both database services and file storage are often offered as managed services by the cloud provider that you use in your application. A minimal sketch of this idea follows after this list.

Note

Of course, this does not mean that you cannot deploy stateful applications to the cloud. They will just be considerably harder to scale out, hindering you from using cloud computing environments to their full potential.

  • Ease of deployment: When scaling out, you will need to deploy new instances of your application quickly. Creating a new instance should not require any kind of manual setup, but should be automated as much as possible (ideally completely).
  • Resiliency: In a cloud environment, especially when using auto-scaling, instances may be shut down at a moment's notice. Also, most cloud providers do not guarantee extremely high availability for individual instances (and suggest scaling out instead, optionally across multiple availability zones). For this reason, termination and sudden death (either intentional, in the case of auto-scaling, or unintentional, in the case of failure) are something we always need to expect in a cloud environment, and the application must handle them accordingly.

Achieving these design goals is not always easy. Cloud providers often support you in this task by offering managed services (for example, highly scalable database services or distributed file storage) that you would otherwise have to manage yourself. Concerning your actual application, there is the twelve-factor app methodology (which we will cover in more detail in a later section), which describes a set of rules for building scalable and resilient applications.

Cloud service models


When it comes to cloud computing offerings, there are three main service models to consider for your project:

  • IaaS (Infrastructure as a Service): This is the model where the cloud service provider gives you access to infrastructure in the cloud, such as servers (virtual and bare metal), networks, firewalls, and storage devices. You use IaaS when all you need is for the cloud provider to manage this infrastructure for you and take the hassle and cost of maintaining it off your hands. IaaS is used by start-ups and organizations that want full control over their application layer. Most IaaS offerings come with a dynamic or elastic scaling option that scales your infrastructure based on your consumption. This, in effect, saves organizations costs, since they only pay for what they use.
  • PaaS (Platform as a Service): This is the next layer up from IaaS. PaaS provides the computing platform you need to run your application. PaaS typically includes the operating system you need to develop your applications, the databases, the web layer (if needed), and the programming language execution environment. With PaaS, you don't have to worry about updates and patches for your application environment; they are taken care of by the cloud provider. Let's say you wrote a powerful .NET application that you want to see running in the cloud. A PaaS solution will provide the .NET environment you need to run your application, combined with the Windows Server operating system and the IIS web server. It will also take care of load balancing and scaling for larger applications. Imagine the amount of money and effort you could save by adopting a PaaS platform instead of doing all of this work in-house.
  • SaaS (Software as a Service): This is the highest layer offering you can obtain as a cloud solution. A SaaS solution is a fully functional piece of software delivered over the web. You access SaaS solutions from a web browser. SaaS solutions are typically used by regular users of the software, as opposed to programmers or software professionals. A very famous example of a SaaS platform is Netflix—a complex piece of software hosted in the cloud, which is available to you via the web. Another popular example is Salesforce. Salesforce solutions get delivered to customers through web browsers with speed and efficiency.

Cloud application architecture patterns


Usually, developing applications that run in a cloud environment is not that different from regular application development. However, there are a few architectural patterns that are particularly common when targeting a cloud environment, which you will learn about in the following sections.

The twelve-factor app

The twelve-factor app methodology is a set of rules for building scalable and resilient cloud applications. It was published by Heroku, one of the dominant PaaS providers. However, it can be applied to all kinds of cloud applications, independent of concrete infrastructure or platform providers. It is also independent of programming languages and persistence services and can equally be applied to Go programming and, for example, Node.js programming. The twelve-factor app methodology describes (unsurprisingly) twelve factors that you should consider in your application for it to be easily scalable, resilient, and platform independent. You can read the full description of each factor at https://12factor.net. For the purpose of this book, we will highlight some factors that we deem especially important:

  • Factor II: Dependencies—Explicitly declare and isolate dependencies: This factor deserves special mention because it is actually not as important in Go programming as in other languages. Typically, a cloud application should never rely on any required library or external tool being already present on a system. Dependencies should be explicitly declared (for example, using an npm package.json file for a Node.js application) so that a package manager can pull all these dependencies when deploying a new instance of the application. In Go, an application is typically deployed as a statically compiled binary that already contains all required libraries. However, even a Go application can be dependent on external system tools (for example, it can fork out to tools such as ImageMagick) or on existing C libraries. Ideally, you should deploy tools like these alongside your application. This is where container engines, such as Docker, shine.
  • Factor III: Config—Store config in the environment: Configuration is any kind of data that might vary between deployments, for example, connection data and credentials for external services and databases. These kinds of data should be passed to the application via environment variables. In a Go application, retrieving these is then as easy as calling os.Getenv("VARIABLE_NAME"). In more complex cases (for example, when you have many configuration variables), you can also resort to libraries such as github.com/tomazk/envcfg or github.com/caarlos0/env. For heavy lifting, you can use the github.com/spf13/viper library. A minimal example of environment-based configuration appears in the sketch after this list.
  • Factor IV: Backing Services—Treat backing services as attached resources: Ensure that the services your app depends on (such as databases, messaging systems, or external APIs) are easily swappable by configuration. For example, your app could accept an environment variable, such as DATABASE_URL, that might contain mysql://root:root@localhost/test for a local development deployment and mysql://root:XXX@prod.XXXX.eu-central-1.rds.amazonaws.com in your production setup.
  • Factor VI: Processes—Execute the app as one or more stateless processes: Running application instances should be stateless; any kind of data that should persist beyond a single request/transaction needs to be stored in an external persistence service. One important case to keep in mind is user sessions in web applications. Often, user session data is stored in the process's memory (or is persisted to the local filesystem) in the expectation that subsequent requests from the same user will be served by the same instance of your application. Instead, try to keep user sessions stateless or move the session state into an external data store, such as Redis or Memcached.
  • Factor IX: Disposability—Maximize robustness with fast startup and graceful shutdown: In a cloud environment, sudden termination (both intentional, for example, in the case of downscaling, and unintentional, in the case of failures) needs to be expected. A twelve-factor app should have fast startup times (typically in the range of a few seconds), allowing it to rapidly deploy new instances. Graceful termination is equally important. When a server shuts down, the operating system will typically tell your application to shut down by sending a SIGTERM signal that the application can catch and react to accordingly (for example, by no longer accepting requests on the service port, finishing the requests that are currently being processed, and then exiting). A minimal sketch of such a graceful shutdown follows after this list.
  • Factor XI: Logs—Treat logs as event streams: Log data is often useful for debugging and monitoring your application's behavior. However, a twelve-factor app should not concern itself with the routing or storage of its own log data. The simplest solution is to write your log stream to the process's standard output stream (for example, just using fmt.Println(...)). Streaming events to stdout allows a developer to simply watch the event stream on their console when developing the application. In production setups, you can configure the execution environment to catch the process output and send the log stream to a place where it can be processed (the possibilities here are endless—you could store them in your server's journald, send them to a syslog server, store your logs in an ELK setup, or send them to an external cloud service).
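
The following is a minimal sketch, using only the standard library, that combines Factor III and Factor IX: configuration is read from the environment, and a SIGTERM triggers a graceful shutdown of the HTTP server. The PORT variable name and the 10-second timeout are assumptions, not prescriptions:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	// Factor III: read configuration from the environment. PORT is an
	// assumed variable name; fall back to 8080 when it is not set.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	srv := &http.Server{Addr: ":" + port}
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello\n"))
	})

	// Run the server in the background so that the main goroutine can
	// wait for termination signals.
	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Factor IX: catch SIGTERM (and SIGINT for local development) and shut
	// down gracefully, finishing in-flight requests before exiting.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, syscall.SIGINT)
	<-stop

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("graceful shutdown failed: %v", err)
	}
}
```

Platforms such as Heroku or Kubernetes send exactly this kind of SIGTERM before terminating an instance, so a handler along these lines is usually all that is needed to shut down cleanly.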

What are microservices?

When an application is maintained by many different developers over a longer period of time, it tends to get more and more complex. Bug fixes, new or changing requirements, and constant technological changes result in your software continually growing and changing. When left unchecked, this software evolution will lead to your application getting more complex and increasingly difficult to maintain.

Preventing this kind of software erosion is the objective of the microservice architecture paradigm that has emerged over the past few years. In a microservice architecture, a software system is split into a set of (potentially a lot of) independent and isolated services. These run as separate processes and communicate using network protocols (of course, each of these services should in itself be a twelve-factor app). For a more thorough introduction to the topic, we can recommend the original article on the microservice architecture by Lewis and Fowler at https://martinfowler.com/articles/microservices.html.

In contrast to traditional Service-Oriented Architectures (SOA), which have been around for quite a while, microservice architectures focus on simplicity. Complex infrastructure components such as ESBs are avoided at all costs, and instead of complicated communication protocols such as SOAP, simpler means of communication such as REST web services (about which you will learn more in Chapter 2, Building Microservices Using Rest APIs) or AMQP messaging (refer to Chapter 4, Asynchronous Microservice Architectures Using Message Queues) are preferred.

Splitting complex software into separate components has several benefits. For instance, different services can be built on different technology stacks. For one service, using Go as the runtime and MongoDB as the persistence layer may be the optimal choice, whereas a Node.js runtime with MySQL persistence might be a better choice for other components. Encapsulating functionality in separate services allows developer teams to choose the right tool for the right job. Another advantage of microservices at the organizational level is that each microservice can be owned by a different team within an organization. Each team can develop, deploy, and operate their services independently, allowing them to adjust their software in a very flexible way.

Deploying microservices

With their focus on statelessness and horizontal scaling, microservices work well with modern cloud environments. Nevertheless, when choosing a microservice architecture, deploying your application will tend to get more complex overall, as you will need to deploy more (and more diverse) applications (all the more reason to stick with the twelve-factor app methodology).

However, each individual service will be easier to deploy than a big monolithic application. Depending on the service's size, it will also be easier to upgrade a service to a new runtime or to replace it with a new implementation entirely. Also, you can scale each microservice individually. This allows you to scale out heavily used parts of your application while keeping less utilized components cost-efficient. Of course, this requires each service to support horizontal scaling.

Deploying microservices gets (potentially) more complex when different services use different technologies. A possible solution for this problem is offered by modern container runtimes such as Docker or RKT. Using containers, you can package an application with all its dependencies into a container image and then use that image to quickly spawn a container running your application on any server that can run Docker (or RKT) containers. (Let's return to the twelve-factor app—deploying applications in containers is one of the most thorough interpretations of dependency isolation as prescribed by Factor II.)
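
As an illustration of this packaging model, a minimal multi-stage Dockerfile for a Go service could look like the following sketch; the image tags and the myservice name are assumptions, not taken from this book:

```dockerfile
# Build stage: compile a statically linked Go binary inside the official Go image.
FROM golang:1.9 AS build
WORKDIR /go/src/myservice
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /myservice .

# Runtime stage: copy only the binary into a small base image.
FROM alpine:3.7
COPY --from=build /myservice /myservice
EXPOSE 8080
ENTRYPOINT ["/myservice"]
```

The resulting image contains the compiled binary and little else from the build environment, which is exactly the kind of dependency isolation Factor II asks for.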

Running container workloads is a service offered by many major cloud providers (such as AWS' Elastic Container Service, the Azure Container Service, or the Google Container Engine). Apart from that, there are also container orchestration engines, such as Docker Swarm, Kubernetes, or Apache Mesos, that you can roll out on IaaS cloud platforms or your own hardware. These orchestration engines make it possible to distribute container workloads over entire server clusters and offer a very high degree of automation. For example, the cluster manager will take care of deploying containers across any number of servers, automatically distributing them according to their resource requirements and usage. Many orchestration engines also offer auto-scaling features and are often tightly integrated with cloud environments.

You will learn more about deploying microservices with Docker and Kubernetes in Chapter 6, Deploying Your Application in Containers.

REST web services and asynchronous messaging

When building a microservice architecture, your individual services need to communicate with one another. One widely accepted de facto standard for microservice communication is RESTful web services (about which you will learn more in Chapter 2, Building Microservices Using Rest APIs, and Chapter 3, Securing Microservices). These are usually built on top of HTTP (although the REST architectural style itself is more or less protocol independent) and follow a client/server model with request/reply communication.
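
As a small taste of what is to come in Chapter 2, here is a minimal sketch of a RESTful endpoint using only Go's standard library; the /events resource and its fields are placeholders, not the book's actual API:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// Event is a placeholder resource used only for this sketch.
type Event struct {
	ID   string `json:"id"`
	Name string `json:"name"`
}

// eventsHandler serves GET (list events) and POST (create an event) requests.
func eventsHandler(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case http.MethodGet:
		// A real service would load events from a database.
		events := []Event{{ID: "1", Name: "Opera Night"}}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(events)
	case http.MethodPost:
		var e Event
		if err := json.NewDecoder(r.Body).Decode(&e); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		w.WriteHeader(http.StatusCreated)
		json.NewEncoder(w).Encode(e)
	default:
		http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
	}
}

func main() {
	http.HandleFunc("/events", eventsHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```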

Synchronous versus Asynchronous communication model

This architecture is typically easy to implement and to maintain. It works well for many use cases. However, the synchronous request/reply pattern may hit its limits when you are implementing a system with complex processes that span many services. Consider the first part of the preceding diagram. Here, we have a user service that manages an application's user database. Whenever a new user is created, we will need to make sure that other services in the system are also made aware of this new user. Using RESTful HTTP, the user service needs to notify these other services by REST calls. This means that the user service needs to know all other services that are in some way affected by the user management domain. This leads to a tight coupling between the components, which is something you'd generally like to avoid.

An alternative communication pattern that can solve these issues is the publish/subscribe pattern. Here, services emit events that other services can listen on. The service emitting the event does not need to know which other services are actually listening to these events. Again, consider the second part of the preceding diagram—here, the user service publishes an event stating that a new user has just been created. Other services can now subscribe to this event and are notified whenever a new user has been created. These architectures usually require the use of a special infrastructure component: the message broker. This component accepts published messages and routes them to their subscribers (typically using a queue as intermediate storage).
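
In Go, this pattern is often captured with a pair of small interfaces so that the concrete broker stays swappable. The sketch below is illustrative only; the interface names and signatures are assumptions, and the message-queue integrations themselves are the subject of Chapter 4:

```go
package messaging

// Event is the payload that services publish. The fields are illustrative.
type Event struct {
	Name    string
	Payload []byte
}

// EventEmitter is used by the publishing side. The emitter neither knows nor
// cares which services (if any) consume the event.
type EventEmitter interface {
	Emit(event Event) error
}

// EventListener is used by the consuming side. It subscribes to event names
// and receives matching events from the message broker via a channel.
type EventListener interface {
	Listen(eventNames ...string) (<-chan Event, <-chan error, error)
}
```

A RabbitMQ-backed implementation and a Kafka-backed one can then satisfy the same interfaces, keeping the services themselves broker-agnostic.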

The publish/subscribe pattern is a very good method to decouple services from one another—when a service publishes events, it does not need to concern itself with where they will go, and when another service subscribes to events, it also does not know where they came from. Furthermore, asynchronous architectures tend to scale better than ones with synchronous communication. Horizontal scaling and load balancing are easily accomplished by distributing messages to multiple subscribers.

Unfortunately, there is no such thing as a free lunch; this flexibility and scalability are paid for with additional complexity. Also, it becomes hard to debug single transactions across multiple services. Whether this trade-off is acceptable for you needs to be assessed on a case-by-case basis.

In Chapter 4, Asynchronous Microservice Architectures Using Message Queues, you will learn more about asynchronous communication patterns and message brokers.

The MyEvents platform


Throughout this book, we will build a useful SaaS application called MyEvents. MyEvents will utilize the technologies that you'll be learning in order to become a modern, scalable, cloud-native, and snappy application. MyEvents is an event management platform that allows users to book tickets for events all over the world. With MyEvents, you will be able to book tickets for yourself and your peers for concerts, carnivals, circuses, and more. MyEvents will keep a record of the bookings, the users, and the different locations where the events are taking place. It will manage your reservations efficiently.

We will make use of microservices, message queues, ReactJS, MongoDB, AWS, and more to construct MyEvents. In order to understand the application better, let's take a look at the logical entities that our overall application will be managing. They will be managed by multiple microservices in order to establish a clear separation of concerns and to achieve the flexibility and scalability that we need:

We will have multiple users; each User can have multiple bookings for events, and each Booking will correspond to a single Event. For each one of our events, there will be a Location where the event is taking place. Inside the Location, we will need to identify the Hall or room where the event is taking place.
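
A rough Go sketch of these entities might look as follows; the field names and types are assumptions made for illustration, and the actual data models are developed later in the book:

```go
package models

import "time"

// User books tickets for events.
type User struct {
	ID        string
	FirstName string
	LastName  string
	Bookings  []Booking
}

// Booking ties a user to a single event.
type Booking struct {
	ID      string
	UserID  string
	EventID string
	Date    time.Time
	Seats   int
}

// Event takes place in a specific hall of a location.
type Event struct {
	ID         string
	Name       string
	LocationID string
	Hall       string
	StartDate  time.Time
}

// Location is a venue that contains one or more halls.
type Location struct {
	ID    string
	Name  string
	Halls []Hall
}

// Hall is a room inside a location where an event takes place.
type Hall struct {
	Name     string
	Capacity int
}
```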

Now, let's take a look at the microservice architecture and the different components that make our application:

Microservice architecture

We will use a ReactJS frontend to interface with the users of our application. The ReactJS UI will use an API gateway (AWS or local) to communicate with the different microservices that form the body of our application. There are two main microservices that represent the logic of MyEvents:

  • Event Service: This is the service that handles the events, their locations, and changes that happen to them
  • Booking Service: This service handles bookings made by users

All our services will be integrated using a publish/subscribe architecture based on message queues. Since we aim to provide you with practical knowledge in the world of microservices and cloud computing, we will support multiple types of message queues. We will support Kafka, RabbitMQ, and SQS from AWS.

The persistence layer will support multiple database technologies as well, in order to expose you to various practical database engines that empower your projects. We will support MongoDB and DynamoDB.

All of our services will support metrics APIs, which will allow us to monitor the statistics of our services via Prometheus.
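
As a rough sketch of what such a metrics API can look like in Go, the widely used Prometheus client library (github.com/prometheus/client_golang) exposes a /metrics endpoint that the Prometheus server scrapes; the bookings_total counter below is a placeholder rather than the book's actual instrumentation:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// bookingsTotal is a placeholder metric counting processed bookings.
var bookingsTotal = prometheus.NewCounter(prometheus.CounterOpts{
	Name: "bookings_total",
	Help: "Total number of bookings processed by this service.",
})

func main() {
	prometheus.MustRegister(bookingsTotal)

	http.HandleFunc("/bookings", func(w http.ResponseWriter, r *http.Request) {
		bookingsTotal.Inc()
		w.Write([]byte("booked\n"))
	})

	// Prometheus scrapes this endpoint to collect the service's metrics.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```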

The MyEvents platform is designed to give you a strong foundation of knowledge and hands-on exposure to the powerful world of microservices and cloud computing.

Summary


In this introductory chapter, you learned about the basic design principles of cloud-native application development. This includes design goals, such as supporting (horizontal) scalability and resilience, and also architectural patterns, such as the twelve-factor app and microservice architectures.

Over the course of the following chapters, you will learn to apply many of these principles while building the MyEvents application. In Chapter 2, Building Microservices Using Rest APIs, you will learn how to implement a small microservice that offers a RESTful web service using the Go programming language. In the following chapters, you will continue to extend this small application and learn how to deploy and operate it in various cloud environments.


Key benefits

  • Build well-designed and secure microservices. Enrich your microservices with continuous integration and monitoring.
  • Containerize your application with Docker
  • Deploy your application to AWS. Learn how to utilize the powerful AWS services from within your application

Description

Awarded as one of the best books of all time by BookAuthority, Cloud Native Programming with Golang will take you on a journey into the world of microservices and cloud computing with the help of Go. Cloud computing and microservices are two very important concepts in modern software architecture. They represent key skills that ambitious software engineers need to acquire in order to design and build software applications capable of performing and scaling. Go is a modern cross-platform programming language that is very powerful yet simple; it is an excellent choice for microservices and cloud applications. Go is gaining more and more popularity, and becoming a very attractive skill. This book starts by covering the software architectural patterns of cloud applications, as well as practical concepts regarding how to scale, distribute, and deploy those applications. You will also learn how to build a JavaScript-based front-end for your application, using TypeScript and React. From there, we dive into commercial cloud offerings by covering AWS. Finally, we conclude our book by providing some overviews of other concepts and technologies that you can explore, to move from where the book leaves off.

What you will learn

  • Understand modern software application architectures
  • Build secure microservices that can effectively communicate with other services
  • Get to know event-driven architectures by diving into message queues such as Kafka, RabbitMQ, and AWS SQS
  • Understand key modern database technologies such as MongoDB and Amazon's DynamoDB
  • Leverage the power of containers
  • Explore Amazon cloud services fundamentals
  • Know how to utilize the power of the Go language to access key services in the Amazon cloud, such as S3, SQS, DynamoDB, and more
  • Build front-end applications using ReactJS with Go
  • Implement CD for modern applications


Table of Contents

19 Chapters

Title Page
Credits
About the Authors
About the Reviewer
www.PacktPub.com
Customer Feedback
Preface
1. Modern Microservice Architectures
2. Building Microservices Using Rest APIs
3. Securing Microservices
4. Asynchronous Microservice Architectures Using Message Queues
5. Building a Frontend with React
6. Deploying Your Application in Containers
7. AWS I – Fundamentals, AWS SDK for Go, and EC2
8. AWS II – S3, SQS, API Gateway, and DynamoDB
9. Continuous Delivery
10. Monitoring Your Application
11. Migration
12. Where to Go from Here?

