Exploring GPT-3: An unofficial first look at the general-purpose language processing API from OpenAI

By Steve Tingiris

Book Aug 2021 296 pages 1st Edition

Exploring GPT-3

Chapter 1: Introducing GPT-3 and the OpenAI API

The buzz about Generative Pre-trained Transformer Version 3 (GPT-3) started with a blog post from a leading Artificial Intelligence (AI) research lab, OpenAI, on June 11, 2020. The post began as follows:

We're releasing an API for accessing new AI models developed by OpenAI. Unlike most AI systems which are designed for one use-case, the API today provides a general-purpose "text in, text out" interface, allowing users to try it on virtually any English language task.

Online demos from early beta testers soon followed—some seemed too good to be true. GPT-3 was writing articles, penning poetry, answering questions, chatting with lifelike responses, translating text from one language to another, summarizing complex documents, and even writing code. The demos were incredibly impressive—things we hadn't seen a general-purpose AI system do before—but equally impressive was that many of the demos were created by people with limited or no formal background in AI and Machine Learning (ML). GPT-3 had raised the bar, not just in terms of the technology, but also in terms of AI accessibility.

GPT-3 is a general-purpose language processing AI model that practically anybody can understand and start using in a matter of minutes. You don't need a Doctor of Philosophy (PhD) in computer science—you don't even need to know how to write code. In fact, everything you'll need to get started is right here in this book. We'll begin in this chapter with the following topics:

  • Introduction to GPT-3
  • Democratizing NLP
  • Understanding prompts, completions, and tokens
  • Introducing Davinci, Babbage, Curie, and Ada
  • Understanding GPT-3 risks

Technical requirements

This chapter requires you to have access to the OpenAI Application Programming Interface (API). You can register for API access on the OpenAI website.

Introduction to GPT-3

In short, GPT-3 is a language model: a statistical model that calculates the probability distribution over a sequence of words. In other words, GPT-3 is a system for guessing which text comes next when text is given as an input.
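To make the definition concrete, here is a toy Python sketch of a statistical language model. GPT-3 itself is a huge neural network, not a lookup table of counts, but the underlying idea of assigning probabilities to the next word is the same:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count how often each word follows each other word."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def next_word_probabilities(model, word):
    """Probability distribution over the words that may come next."""
    followers = model[word.lower()]
    total = sum(followers.values())
    return {w: c / total for w, c in followers.items()}

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(next_word_probabilities(model, "the"))  # roughly {'cat': 0.67, 'mat': 0.33}
```

Given this tiny corpus, the model guesses that "cat" most likely follows "the". GPT-3 does the same kind of next-text prediction, just with billions of parameters instead of a table of counts.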

Now, before we delve further into what GPT-3 is, let's cover a brief introduction (or refresher) on Natural Language Processing (NLP).

Simplifying NLP

NLP is a branch of AI that focuses on the use of natural human language for various computing applications. NLP is a broad category that encompasses many different types of language processing tasks, including sentiment analysis, speech recognition, machine translation, text generation, and text summarization, to name but a few.

In NLP, language models are used to calculate the probability distribution over a sequence of words. Language models are essential because of the extremely complex and nuanced nature of human languages. For example, pay in full and painful or tee time and teatime sound alike but have very different meanings. A phrase such as she's on fire could be literal or figurative, and words such as big and large can be used interchangeably in some cases but not in others—for example, using the word big to refer to an older sibling wouldn't have the same meaning as using the word large. Thus, language models are used to deal with this complexity, but that's easier said than done.

While understanding things such as word meanings and their appropriate usage seems trivial to humans, NLP tasks can be challenging for machines. This is especially true for more complex language processing tasks such as recognizing irony or sarcasm—tasks that even challenge humans at times.

Today, the best technical approach to a given NLP task depends on the task. So, most of the best-performing, state-of-the-art (SOTA) NLP systems are specialized systems that have been fine-tuned for a single purpose or a narrow range of tasks. Ideally, however, a single system could successfully handle any NLP task. That's the goal of GPT-3: to provide a general-purpose AI system for NLP. So, even though the best-performing NLP systems today tend to be specialized, purpose-built systems, GPT-3 achieves SOTA performance on a number of common NLP tasks, showing the potential for a future general-purpose NLP system that could provide SOTA performance for any NLP task.

What exactly is GPT-3?

Although GPT-3 is a general-purpose NLP system, it really just does one thing: it predicts what comes next based on the text that is provided as input. But it turns out that, with the right architecture and enough data, this one thing can handle a stunning array of language processing tasks.

GPT-3 is the third version of the GPT language model from OpenAI. So, although it started to become popular in the summer of 2020, the first version of GPT was announced 2 years earlier, and the following version, GPT-2, was announced in February 2019. But even though GPT-3 is the third version, the general system design and architecture haven't changed much from GPT-2. There is one big difference, however, and that's the size of the dataset that was used for training.

GPT-3 was trained on a massive dataset composed of text from the internet, books, and other sources, containing roughly 57 billion words, and the resulting model has 175 billion parameters. That's 10 times larger than GPT-2 and the next-largest language model. To put the model size into perspective, the average human might read, write, speak, and hear upward of a billion words in an entire lifetime. So, GPT-3 has been trained on an estimated 57 times the number of words most humans will ever process.

The GPT-3 language model is massive, so it isn't something you'll be downloading and dabbling with on your laptop. But even if you could (which you can't because it's not available to download), it would cost millions of dollars in computing resources each time you wanted to build the model. This would put GPT-3 out of reach for most small companies and virtually all individuals if you had to rely on your own computing resources to use it. Thankfully, you don't. OpenAI makes GPT-3 available through an API that is both affordable and easy to use. So, anyone can use some of the most advanced AI ever created!

Democratizing NLP

Anyone with access to the OpenAI API can use GPT-3. The API is a general-purpose text in, text out interface that can be used for virtually any language task. To use the API, you simply pass in text and get a text response back. The task might be sentiment analysis, writing an article, answering a question, or summarizing a document. It doesn't matter, as far as the API is concerned: it's all done the same way, which makes the API easy enough for just about anyone to use, even non-programmers.

The text you pass in is referred to as a prompt, and the returned text is called a completion. A prompt is used by GPT-3 to determine how best to complete the task. In the simplest case, a prompt can provide a few words to get started with. For example, if the prompt was If today is Monday, tomorrow is, GPT-3 would likely respond with Tuesday, along with some additional text such as If today is Tuesday, tomorrow is Wednesday, and so on. This means that what you get out of GPT-3 depends on what you send to it.
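To make the prompt-in, completion-out flow concrete, here is a minimal Python sketch of an API call. The endpoint path, parameter names, and engine name are assumptions based on the engine-based completions API OpenAI offered at the time of writing, and the API key is a placeholder; check the current OpenAI API reference before relying on any of these details.

```python
import json
import urllib.request

API_KEY = "your-api-key-here"  # placeholder: substitute your own OpenAI API key

def build_completion_request(prompt, engine="davinci", max_tokens=32):
    """Assemble the URL and JSON body for a completion request.
    The endpoint shape is an assumption based on the 2021-era API."""
    url = f"https://api.openai.com/v1/engines/{engine}/completions"
    body = {"prompt": prompt, "max_tokens": max_tokens}
    return url, body

def get_completion(prompt):
    """Send the prompt to the API and return the generated completion text."""
    url, body = build_completion_request(prompt)
    request = urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())
    return result["choices"][0]["text"]

# With a valid key, the prompt from the text would likely complete with
# "Tuesday" and similar follow-on sentences:
# print(get_completion("If today is Monday, tomorrow is"))
```

Note that the prompt is just plain text in the request body; everything else is ordinary HTTP plumbing.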

As you might guess, the quality of a completion depends heavily on the prompt. GPT-3 uses all of the text in a prompt to help generate the most relevant completion. Each and every word, along with how the prompt is structured, helps improve the language model prediction results. So, understanding how to write and test prompts is the key to unlocking GPT-3's true potential.

Understanding prompts, completions, and tokens

Literally any text can be used as a prompt—send some text in and get some text back. However, as entertaining as it can be to see what GPT-3 does with random strings, the real power comes from understanding how to write effective prompts.


Prompts are how you get GPT-3 to do what you want. It's like programming, but with plain English. So, you have to know what you're trying to accomplish, but rather than writing code, you use words and plain text.

When you're writing prompts, the main thing to keep in mind is that GPT-3 is trying to figure out which text should come next, so including things such as instructions and examples provides context that helps the model figure out the best possible completion. Also, quality matters: for example, spelling errors, unclear text, and the number of examples provided will all affect the quality of the completion.

Another key consideration is the prompt size. While a prompt can be any text, the prompt and the resulting completion must add up to fewer than 2,048 tokens. We'll discuss tokens a bit later in this chapter, but that's roughly 1,500 words.

So, a prompt can be any text, and there aren't hard and fast rules that must be followed like there are when you're writing code. However, there are some guidelines for structuring your prompt text that can be helpful in getting the best results.

Different kinds of prompts

We'll dive deep into prompt writing throughout this book, but let's start with the different prompt types. These are outlined as follows:

  • Zero-shot prompts
  • One-shot prompts
  • Few-shot prompts

Zero-shot prompts

A zero-shot prompt is the simplest type of prompt. It provides only a description of a task, or some text for GPT-3 to get started with. Again, it could literally be anything: a question, the start of a story, instructions. The clearer your prompt text is, the easier it will be for GPT-3 to understand what should come next. Here is an example of a zero-shot prompt for generating an email message. The completion will pick up where the prompt ends:

Write an email to my friend Jay from me Steve thanking him for covering my shift this past Friday. Tell him to let me know if I can ever return the favor.

The following screenshot is taken from a web-based testing tool called the Playground. We'll discuss the Playground more in Chapter 2, GPT-3 Applications and Use Cases, and Chapter 3, Working with the OpenAI Playground, but for now we'll just use it to show the completion generated by GPT-3 as a result of the preceding prompt. Note that the original prompt text is bold, and the completion shows as regular text:

Figure 1.1 – Zero-shot prompt example

So, a zero-shot prompt is just a few words or a short description of a task without any examples. Sometimes this is all GPT-3 needs to complete the task. Other times, you may need to include one or more examples. A prompt that provides a single example is referred to as a one-shot prompt.

One-shot prompts

A one-shot prompt provides one example that GPT-3 can use to learn how to best complete a task. Here is an example of a one-shot prompt that provides a task description (the first line) and a single example (the second line):

A list of actors in the movie Star Wars 
1. Mark Hamill: Luke Skywalker

From just the description and the one example, GPT-3 learns what the task is and how it should be completed. In this example, the task is to create a list of actors from the movie Star Wars. The following screenshot shows the completion generated from this prompt:

Figure 1.2 – One-shot prompt example

The one-shot prompt works great for lists and commonly understood patterns. But sometimes you'll need more than one example. When that's the case, you'll use a few-shot prompt.

Few-shot prompts

A few-shot prompt provides multiple examples, typically 10 to 100. Multiple examples can be useful for showing a pattern that GPT-3 should continue. Providing more examples will likely increase the quality of the completion because the prompt gives GPT-3 more to learn from.

Here is an example of a few-shot prompt to generate a simulated conversation. Notice that the examples provide a back-and-forth dialog, with things that might be said in a conversation:

This is a conversation between Steve, the author of the book Exploring GPT-3 and someone who is reading the book.
Reader: Why did you decide to write the book?
Steve: Because I'm super fascinated by GPT-3 and emerging technology in general.
Reader: What will I learn from this book?
Steve: The book provides an introduction to GPT-3 from OpenAI. You'll learn what GPT-3 is and how to get started using it.
Reader: Do I need to be a coder to follow along?
Steve: No. Even if you've never written a line of code before, you'll be able to follow along just fine.

In the following screenshot, you can see that GPT-3 continues the simulated conversation that was started in the examples provided in the prompt:

Figure 1.3 – Few-shot prompt example
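Because a few-shot prompt is just structured text, it's easy to assemble programmatically. The following sketch is illustrative only (the helper function and its dialog format are not part of the API); it builds a dialog-style few-shot prompt, like the reader/author conversation above, from a list of example exchanges:

```python
def build_dialog_prompt(description, examples, next_question):
    """Combine a task description, example exchanges, and the next
    question into a single few-shot prompt string."""
    lines = [description]
    for question, answer in examples:
        lines.append(f"Reader: {question}")
        lines.append(f"Steve: {answer}")
    lines.append(f"Reader: {next_question}")
    lines.append("Steve:")  # GPT-3's completion picks up here
    return "\n".join(lines)

prompt = build_dialog_prompt(
    "This is a conversation between Steve, the author of the book "
    "Exploring GPT-3, and someone who is reading the book.",
    [
        ("Why did you decide to write the book?",
         "Because I'm super fascinated by GPT-3 and emerging technology in general."),
        ("Do I need to be a coder to follow along?",
         "No. Even if you've never written a line of code before, "
         "you'll be able to follow along just fine."),
    ],
    "How long will it take to read?",
)
print(prompt)
```

The trailing "Steve:" line is the cue for GPT-3 to continue the pattern and answer the final question in the author's voice.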

Now that you understand the different prompt types, let's take a look at some prompt examples.

Prompt examples

The OpenAI API can handle a variety of tasks. The possibilities range from generating original stories to performing complex text analysis, and everything in between. To get familiar with the kinds of tasks GPT-3 can perform, OpenAI provides a number of prompt examples. You can find example prompts in the Playground and in the OpenAI documentation.

In the Playground, the examples are referred to as presets. Again, we'll cover the Playground in detail in Chapter 3, Working with the OpenAI Playground, but the following screenshot shows some of the presets that are available:

Figure 1.4 – Presets

Example prompts are also available in the OpenAI documentation. The OpenAI documentation is excellent and includes a number of great prompt examples, with links to open and test them in the Playground. The following screenshot shows an example prompt from the OpenAI documentation. Notice the Open this example in Playground link below the prompt example. You can use that link to open the prompt in the Playground:

Figure 1.5 – OpenAI documentation provides prompt examples

Now that you have an understanding of prompts, let's talk about how GPT-3 uses them to generate a completion.


Again, a completion refers to the text that is generated and returned as a result of the provided prompt/input. You'll also recall that GPT-3 was not specifically trained to perform any one type of NLP task—it's a general-purpose language processing system. However, GPT-3 can be shown how to complete a given task using a prompt. This is called meta-learning.


With most NLP systems, the data used to teach the system how to complete a task is provided when the underlying ML model is trained. So, to improve results for a given task, the underlying training must be updated, and a new version of the model must be built. GPT-3 works differently, as it isn't trained for any specific task. Rather, it was designed to recognize patterns in the prompt text and to continue the pattern(s) by using the underlying general-purpose model. This approach is referred to as meta-learning because the prompt is used to teach GPT-3 how to generate the best possible completion, without the need for retraining. So, in effect, the different prompt types (zero-shot, one-shot, and few-shot) can be used to program GPT-3 for different types of tasks, and you can provide a lot of instructions in the prompt—up to 2,048 tokens. Alright—now is a good time to talk about tokens.


When a prompt is sent to GPT-3, it's broken down into tokens. Tokens are numeric representations of words or, more often, parts of words. Numbers are used for tokens rather than words or sentences because they can be processed more efficiently. This enables GPT-3 to work with relatively large amounts of text. That said, as you've learned, there is still a limit of 2,048 tokens (approximately 1,500 words) for the combined prompt and the resulting generated completion.

You can stay under the token limit by estimating the number of tokens that will be used in your prompt and resulting completion. On average, for English words, every four characters represent one token. So, just add the number of characters in your prompt to the response length and divide the sum by four. This will give you a general idea of the tokens required. This is helpful if you're trying to get an idea of how many tokens are required for a number of tasks.
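The four-characters-per-token rule of thumb is easy to turn into a quick estimator. Keep in mind this is only a rough approximation, not the tokenizer GPT-3 actually uses:

```python
def estimate_tokens(prompt, response_length_chars=0):
    """Roughly estimate token usage: about one token per four
    characters of English text (an approximation only)."""
    total_chars = len(prompt) + response_length_chars
    return total_chars // 4

prompt = "Do or do not. There is no try."
print(estimate_tokens(prompt))  # 7 (the Playground reports 10, so keep a margin)
```

As the Yoda example shows, the approximation can undershoot the real count, so leave yourself some headroom when you're close to the 2,048-token limit.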

Another way to get the token count is with the token count indicator in the Playground. This is located just under the large text input, on the bottom right. The magnified area in the following screenshot shows the token count. If you hover your mouse over the number, you'll also see the total count with the completion. For our example, the prompt Do or do not. There is no try.—the wise words from Master Yoda—uses 10 tokens and 74 tokens with the completion:

Figure 1.6 – Token count

While understanding tokens is important for staying under the 2,048-token limit, it's also important because tokens are what OpenAI uses as the basis for usage fees. Overall token usage reporting is available for your account. The following screenshot shows an example usage report. We'll discuss this more in Chapter 3, Working with the OpenAI Playground:

Figure 1.7 – Usage statistics

In addition to token usage, the other thing that affects the costs associated with using GPT-3 is the engine you choose to process your prompts. The engine refers to the language model that will be used. The main difference between the engines is the size of the associated model. Larger models can complete more complex tasks, but smaller models are more efficient. So, depending on the task complexity, you can significantly reduce costs by using a smaller model. The following screenshot shows the model pricing at the time of publishing. As you can see, the cost differences can be significant:

Figure 1.8 – Model pricing
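The token-plus-engine billing model is simple enough to sketch in code. The per-1,000-token prices below are hypothetical placeholders for illustration only; the actual prices are the ones shown in Figure 1.8 and on OpenAI's pricing page:

```python
# Hypothetical per-1,000-token prices, for illustration only.
PRICE_PER_1K_TOKENS = {
    "davinci": 0.06,
    "curie": 0.006,
    "babbage": 0.0012,
    "ada": 0.0008,
}

def estimate_cost(engine, tokens):
    """Estimate the cost of a request: usage is billed per 1,000 tokens."""
    return PRICE_PER_1K_TOKENS[engine] * tokens / 1000

# The same 2,000-token job costs far less on a smaller engine:
print(f"{estimate_cost('davinci', 2000):.4f}")  # 0.1200
print(f"{estimate_cost('ada', 2000):.4f}")      # 0.0016
```

Even with made-up prices, the shape of the calculation shows why matching the engine to the task complexity matters for cost.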

So, the engines, or models, each have a different cost, but the one you'll need depends on the task you're performing. Let's look at the different engine options next.

Introducing Davinci, Babbage, Curie, and Ada

The massive dataset that is used for training GPT-3 is the primary reason why it's so powerful. However, bigger is only better when it's necessary—and more power comes at a cost. For those reasons, OpenAI provides multiple models to choose from. Today there are four primary models available, along with a model for content filtering and instruct models.

The available models, or engines (as they're also referred to), are named Davinci, Curie, Babbage, and Ada. Of the four, Davinci is the largest and most capable. Davinci can perform any task that any other engine can perform. Curie is the next most capable engine and can do anything that Babbage or Ada can do. Ada is the least capable engine, but it's also the fastest and lowest-cost engine.

When you're getting started, and for initially testing new prompts, you'll usually want to begin with Davinci, then try Curie, Babbage, or Ada to see if one of them can complete the task faster or more cost-effectively. The following is an overview of each engine and the types of tasks each might be best suited for. However, keep in mind that you'll want to test. Even though the smaller engines aren't trained with as much data, they are all still general-purpose models.


Davinci

Davinci is the most capable model and can do anything that any other model can do, and much more, often with fewer instructions. Davinci is able to solve logic problems, determine cause and effect, understand the intent of text, produce creative content, explain character motives, and handle complex summarization tasks.


Curie

Curie tries to balance power and speed. It can do anything that Ada or Babbage can do, but it's also capable of handling more complex classification tasks and more nuanced tasks such as summarization, sentiment analysis, chatbot applications, and question answering.


Babbage

Babbage is a bit more capable than Ada but not quite as performant. It can perform all of the same tasks as Ada, but it can also handle slightly more involved classification tasks, and it's well suited for semantic search tasks that rank how well documents match a search query.


Ada

Ada is usually the fastest and least costly model. It's best for less nuanced tasks: for example, parsing text, reformatting text, and simpler classification tasks. The more context you provide Ada, the better it will likely perform.

Content filtering model

To help prevent inappropriate completions, OpenAI provides a content filtering model that is fine-tuned to recognize potentially offensive or hurtful language.

Instruct models

These are models that are built on top of the Davinci and Curie models. Instruct models are tuned to make it easier to tell the API what you want it to do. Clear instructions can often produce better results than the associated core model.

A snapshot in time

A final note to keep in mind about all of the engines is that they are all a snapshot in time, meaning the data used to train them was cut off on the date the model was built. So, GPT-3 is not working with up-to-the-minute or even up-to-the-day data; it's likely weeks or months old. OpenAI is planning to add more continuous training in the future, but today this is a consideration to keep in mind.

All of the GPT-3 models are extremely powerful and capable of generating text that is indistinguishable from human-written text. This holds tremendous potential for all kinds of applications. In most cases, that's a good thing. However, not all potential use cases are good.

Understanding GPT-3 risks

GPT-3 is a fantastic technology, with numerous practical and valuable potential applications. But as is often the case with powerful technologies, with its potential comes risk. In GPT-3's case, some of those risks include inappropriate results and potentially malicious use cases.

Inappropriate or offensive results

GPT-3 generates text so well that it can seem as though it is aware of what it is saying. It's not. It's an AI system with an excellent language model—it is not conscious in any way, so it will never willfully say something hurtful or inappropriate because it has no will. That said, it can certainly generate inappropriate, hateful, or malicious results—it's just not intentional.

Nevertheless, the fact that GPT-3 can, and likely will, generate offensive text at times needs to be understood and considered when using GPT-3 or making GPT-3 results available to others. This is especially true for results that might be seen by children. We'll discuss this more and look at how to deal with it in Chapter 6, Content Filtering.

Potential for malicious use

It's not hard to imagine potentially malicious or harmful uses for GPT-3. OpenAI even describes how GPT-3 could be weaponized for misinformation campaigns or for creating fake product reviews. But OpenAI's declared mission is to ensure that artificial general intelligence benefits all of humanity. Hence, pursuing that mission includes taking responsible steps to prevent their AI from being used for the wrong purposes. So, OpenAI has implemented an application approval process for all applications that will use GPT-3 or the OpenAI API.

But as application developers, this is something we also need to consider. When we build an application that uses GPT-3, we need to consider if and how the application could be used for the wrong purposes and take the necessary steps to prevent it. We'll talk more about this in Chapter 10, Going Live with OpenAI-Powered Apps.


Summary

In this chapter, you learned that GPT-3 is a general-purpose language model capable of handling virtually any language processing task. You learned how GPT-3 works at a high level, along with key terms and concepts. We introduced the available models and discussed how all GPT-3 applications must go through an approval process to help prevent potentially inappropriate or harmful uses.

In the next chapter, we'll discuss different ways to use GPT-3 and look at specific GPT-3 use case examples.


Key benefits

  • Understand the power of the GPT-3 language model and the risks involved
  • Explore core GPT-3 use cases such as text generation, classification, and semantic search using engaging examples
  • Plan and prepare a GPT-3 application for the OpenAI review process required for publishing a live application


Generative Pre-trained Transformer 3 (GPT-3) is a highly advanced language model from OpenAI that can generate written text that is virtually indistinguishable from text written by humans. Whether you have a technical or non-technical background, this book will help you understand and start working with GPT-3 and the OpenAI API. If you want to get hands-on with leveraging artificial intelligence for natural language processing (NLP) tasks, this easy-to-follow book will help you get started. Beginning with a high-level introduction to NLP and GPT-3, the book takes you through practical examples that show how to leverage the OpenAI API and GPT-3 for text generation, classification, and semantic search. You'll explore the capabilities of the OpenAI API and GPT-3 and find out which NLP use cases GPT-3 is best suited for. You’ll also learn how to use the API and optimize requests for the best possible results. With examples focusing on the OpenAI Playground and easy-to-follow JavaScript and Python code samples, the book illustrates the possible applications of GPT-3 in production. By the end of this book, you'll understand the best use cases for GPT-3 and how to integrate the OpenAI API in your applications for a wide array of NLP tasks.

What you will learn

  • Understand what GPT-3 is and how it can be used for various NLP tasks
  • Get a high-level introduction to GPT-3 and the OpenAI API
  • Implement JavaScript and Python code examples that call the OpenAI API
  • Structure GPT-3 prompts and options to get the best possible results
  • Select the right GPT-3 engine or model to optimize for speed and cost-efficiency
  • Find out which use cases would not be suitable for GPT-3
  • Create a GPT-3-powered knowledge base application that follows OpenAI guidelines

Product Details

Publication date : Aug 27, 2021
Length : 296 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781800563193


Table of Contents

15 Chapters
Preface
1. Section 1: Understanding GPT-3 and the OpenAI API
2. Chapter 1: Introducing GPT-3 and the OpenAI API
3. Chapter 2: GPT-3 Applications and Use Cases
4. Section 2: Getting Started with GPT-3
5. Chapter 3: Working with the OpenAI Playground
6. Chapter 4: Working with the OpenAI API
7. Chapter 5: Calling the OpenAI API in Code
8. Section 3: Using the OpenAI API
9. Chapter 6: Content Filtering
10. Chapter 7: Generating and Transforming Text
11. Chapter 8: Classifying and Categorizing Text
12. Chapter 9: Building a GPT-3-Powered Question-Answering App
13. Chapter 10: Going Live with OpenAI-Powered Apps
14. Other Books You May Enjoy


