Apache Solr for Indexing Data

Apache Solr for Indexing Data: Enhance your Solr indexing experience with advanced techniques and the built-in functionalities available in Apache Solr


Product Details


Publication date : Dec 28, 2015
Length : 160 pages
Edition : 1st Edition
Language : English
ISBN-13 : 9781783553235

Apache Solr for Indexing Data

Chapter 1. Getting Started

We will start this chapter with a quick overview of Solr, followed by a section that helps you get Solr up and running. We will also cover some basic building blocks of the Solr architecture, its directory structure, and its configuration files. This chapter covers the following topics:

  • Overview and installation of Solr

  • Running Solr

  • The Solr architecture and directory structure

  • Multicore Solr

Overview and installation of Solr


Solr is one of the most popular open source enterprise search platforms, built on the Apache Lucene project. Its features include full-text search, faceted search, highlighting, near-real-time indexing, dynamic clustering, rich document handling, and geospatial search. Solr is highly reliable and scalable. This is why Solr powers the search features of some of the world's largest Internet sites, for example, Netflix, TicketMaster, SourceForge, and so on (source: https://wiki.apache.org/solr/PublicServers).

Solr is written in Java and runs as a standalone full text search server with a REST-like API. You feed documents into it (which is called indexing) via XML, JSON, CSV, and binary over HTTP. You query it through HTTP GET and receive XML, JSON, CSV, and binary results.
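As a minimal sketch of this HTTP interface (assuming a local instance on the default port 8983 and the stock example schema, which already defines id and name fields), indexing one document and querying it back from the command line could look like this:

$ curl 'http://localhost:8983/solr/update?commit=true' -H 'Content-Type: application/json' -d '[{"id": "book-1", "name": "Apache Solr for Indexing Data"}]'
$ curl 'http://localhost:8983/solr/select?q=name:solr&wt=json'

The first call posts a JSON document to the update handler and commits it; the second fetches it back through the select handler in JSON format.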

Let's now walk through the installation of Solr. This section describes how to install Solr on various operating systems, namely OS X (Mac), Windows, and Linux. Let's go through each of them one by one.

Installing Solr on OS X (Mac)

The easiest way to install Solr on OS X is by using Homebrew. If you are not familiar with Homebrew and don't have it installed on your Mac, go to http://brew.sh/. Homebrew is the easiest way of installing packages and software on a Mac.

You will require JRE 1.7 or above to install Solr on OS X. Just type java -version in the terminal to see which version of the JRE is installed on your computer. If it's lower than 1.7, you need to upgrade to a higher version before proceeding with the following instructions.

Just type the following command in the terminal and it will automatically download all the files needed for Solr. Sit back and relax for a few minutes until it completes:

$ brew install solr

Running Solr


To test whether your installation completed successfully, you need to run Solr. Type these commands in the terminal to run it (the version number in the path should match the version that Homebrew installed, so adjust it if yours differs):

$ cd /usr/local/Cellar/solr/4.4.0/libexec/example/
$ java -jar start.jar

After you run the preceding commands, you will see a lot of log messages in the terminal. Don't worry; this is normal. Just fix any errors if they appear. Once the messages stop and there are no error messages, simply go to any web browser and open http://localhost:8983/solr/#/.

Tip

Downloading the example code

You can download the example code files from your account at http://www.packtpub.com for all the Packt Publishing books you have purchased. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

You will see the following screen in your browser:

A fresh Solr installation does not contain any data. In Solr terminology, a unit of data is termed a document. You will learn how to index data in Solr in the upcoming chapters.

Installing Solr on Windows

There are multiple ways of installing Solr on a Windows machine. Here, I have explained the way to set up Solr with Jetty running as a service via NSSM:

  1. Install the latest Java JDK from http://www.oracle.com/technetwork/java/javase/downloads/index.html.

  2. Download the latest Solr release (ZIP version) from http://www.apache.org/dyn/closer.cgi/lucene/solr/. At the time of writing this book, the latest Solr release was 4.10.1.

  3. Unzip the Solr download. You should have files as shown in the following screenshot. Open the example folder.

  4. Copy the etc, lib, logs, solr, and webapps folders and start.jar to C:\solr (you will need to create the folder at C:\solr), as shown in the following screenshot:

  5. Now open the C:\solr\solr folder and copy the contents back to the root C:\solr folder. When you are done, you can delete the C:\solr\solr folder. In the following image, the selected folder is the one you can now delete:

    At this point, your C:\solr directory should look like what is shown in the following screenshot:

  6. Solr can be run at this point if you start it from the command line. Change your directory to C:\solr and then run java -Dsolr.solr.home=C:/solr/ -jar start.jar.

  7. If you go to http://localhost:8983/solr/, you should see the Solr dashboard.

  8. Now Solr is up and running, so we can work on getting Jetty to run as a Windows service. Since Jetty comes bundled with Solr, all that we need to do is run it as a service. There are several options for doing this, but the one I prefer is the Non-Sucking Service Manager (NSSM), the most compatible service manager across Windows environments. NSSM can be downloaded from http://nssm.cc/download.

  9. Once you have downloaded NSSM, open the win32 or win64 folder as appropriate and copy nssm.exe to your C:\solr folder.

  10. Open Command Prompt, change the directory to C:\solr, and then run nssm install Solr.

  11. A dialog will open. Select java.exe as the application located at C:\Windows\System32\.

  12. In the options input box, enter: -Dsolr.solr.home=C:/solr/ -Djetty.home=C:/solr/ -Djetty.logs=C:/solr/logs/ -cp C:/solr/lib/*.jar;C:/solr/start.jar -jar C:/solr/start.jar.

  13. Click on Install service. You should get a service successfully installed message.

  14. Finally, run net start Solr. (The command-line sequence for these steps is recapped after this list.)

  15. Jetty should now be running as a service. Check this by going to http://localhost:8983/solr/.
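To recap steps 10 to 14 as a single Command Prompt session (a sketch that assumes NSSM was copied to C:\solr as described above; the NSSM dialog from steps 11 and 12 opens between the two commands):

C:\> cd C:\solr
C:\solr> nssm install Solr
C:\solr> net start Solr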

Installing Solr on Linux

To install Solr on Linux/Unix, you will need Java Runtime Environment (JRE) version 1.7 or higher. Then follow these steps:

  1. Download the latest Solr release (.tgz) from http://www.apache.org/dyn/closer.cgi/lucene/solr/. At the time of writing this book, the latest release was 4.10.1.

  2. Unpack the file to your desired location.

  3. Solr runs inside a Java servlet container, such as Tomcat, Jetty, and so on. The Solr distribution includes a working demo server in the example directory, which runs in Jetty. You can use this bundled Jetty servlet container or your preferred servlet container (a condensed command sequence for the bundled option follows this list). If you are using a servlet container other than Jetty and it's already running, then stop that server.

  4. Copy the solr-4.10.1.war file from the Solr distribution under the dist directory to the webapps directory of your servlet container. Change the name of this file; it must be named solr.war.

  5. Copy the Solr home directory, solr-4.x.0/example/solr/, from the distribution to your desired Solr home location.

  6. Start your servlet container, passing to it the location of your Solr home in one of these ways:

    1. Set the solr.solr.home Java system property to your Solr home (for example, using this example jetty setup: java -Dsolr.solr.home=/some/dir -jar start.jar).

    2. Configure the servlet container so that a JNDI lookup of java:comp/env/solr/home by the Solr web app will point to your Solr Home.

    3. Start the servlet container in the directory containing ./solr. The default Solr Home is solr under the JVM's current working directory ($CWD/solr).

  7. To confirm the installation, just go to http://localhost:8983/solr/ and you will see the Solr dashboard. Now your Solr is up and running.
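If you simply want to use the Jetty demo server bundled in the example directory (the option mentioned in step 3), the whole procedure reduces to something like the following sketch. The 4.10.1 version number is an assumption based on the release mentioned earlier, and the default Solr Home ($CWD/solr from step 6) is used:

$ tar -xzf solr-4.10.1.tgz
$ cd solr-4.10.1/example
$ java -jar start.jar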

Thus, by the end of the installation, your Solr is up and running. But since we have not fed any data into Solr, there is nothing in the index yet. Let's try to insert some example data into our server.

The Solr download comes with example data bundled in it. We can use the same data for indexing as an example. Go to the exampledocs directory under the example directory. Here, you will see a lot of files. Now go to the command line (terminal) and type the following commands:

$ cd $SOLR_HOME/example/exampledocs/
$ ./post.sh vidcard.xml

Within the post.sh file, the script calls http://localhost:8983/solr/update using curl to post the XML data from the vidcard.xml file. When the import completes without any errors, you will see a confirmation message in the terminal.
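If you are curious about what post.sh does under the hood, a roughly equivalent pair of curl calls (an approximation based on the description above, assuming the default port) would be the following; the first posts the file and the second issues a commit so that the documents become visible to searches:

$ curl 'http://localhost:8983/solr/update' -H 'Content-Type: application/xml' --data-binary @vidcard.xml
$ curl 'http://localhost:8983/solr/update' -H 'Content-Type: application/xml' --data-binary '<commit/>'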

Now let's try to check our imported data from a web browser. Try http://localhost:8983/solr/select?q=*:*&wt=json to fetch all of the data in your Solr instance.
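Broadly, the JSON response has the shape sketched below; the numFound value and the documents listed here are placeholders and will reflect whatever you have actually indexed:

{
  "responseHeader": { "status": 0, "QTime": 2,
    "params": { "q": "*:*", "wt": "json" } },
  "response": { "numFound": 2, "start": 0,
    "docs": [ { "id": "...", "name": "..." } ] }
}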

When you see the preceding data, it means that your Solr server is running properly and is ready to index your desired feed. You will read about indexing in depth in the upcoming chapters.

The Solr architecture and directory structure


In real-world scenarios, Solr runs with other applications on a web server. A typical example is an online store application. The store provides a user interface, a shopping cart, an item catalogue, and a way to make purchases. It needs to store this information in some sort of database. Here, Solr makes it easy to add the capability of searching data in the online store. To make data searchable, you need to feed it to Solr for indexing. Data can be fed to Solr in various ways and also in various formats, such as .pdf, .doc, .txt, and so on. In the process of feeding data to Solr, you need to define a schema. A schema is a way of telling Solr about your data and how you want it indexed. Many factors need to be considered while feeding data, which we will discuss in detail in upcoming chapters.

Solr queries are RESTful, which means that a Solr query is just a simple HTTP request and the response is a structured document, mainly in XML, but it could also be JSON, CSV, or another format, depending on your requirements. A typical architecture of Solr in the real world looks something like this:

Do not worry if you are not able to understand the preceding diagram right now. We will cover every component related to indexing in detail. The purpose of this diagram is to give you a feel for the architecture of Solr and how it works in the real world. If you look at the preceding diagram carefully, you will find two XML files named schema.xml and solrconfig.xml. These are the two most important files in the Solr configuration and are considered the building blocks of Solr.

Solr directory structure

Here's the directory layout of a typical Solr Home directory:

| + conf 
|     - schema.xml 
|     - solrconfig.xml 
|     - stopwords.txt
|     - synonyms.txt etc
| + data 
|     - index 
|     - spellchecker

Let's get a brief understanding of solrconfig.xml and schema.xml here before we proceed further, as these are the building blocks of Solr (as stated earlier). We will cover them in detail in the next few chapters.

The solrconfig.xml file is the core configuration file of Solr, with most parameters affecting Solr itself directly. This file can be found in the solr/collection1/conf/ directory. When configuring Solr, you'll work with solrconfig.xml often. The file consists of a series of XML statements that set configuration values, and some of the most important configurations are:

  • Defining the data directory (the directory where index files are kept)

  • Request handlers (which handle incoming HTTP requests)

  • Listeners

  • Request dispatchers (used to manage HTTP communications)

  • Admin web interface settings

  • Replication and duplication parameters

These are some of the important configurations defined in solrconfig.xml. This file is well commented; I would advise you to go through it from the start and read all the comments. You will get a very good understanding of the various components involved in the Solr configuration.
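As a brief, abbreviated illustration (not the full file), the kinds of elements you will find in solrconfig.xml look something like the following sketch; the exact values in your copy may differ:

<config>
  <!-- where Solr keeps its index files -->
  <dataDir>${solr.data.dir:}</dataDir>

  <!-- a request handler that serves search requests at /select -->
  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">10</int>
    </lst>
  </requestHandler>
</config>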

The second most important configuration file is called schema.xml. This file can be found in the solr/collection1/conf/ directory. As the name says, this file is used to define the schema of the data (content) that you want to index and make searchable. Data is called a document in Solr terminology. The schema.xml file contains all the details about the fields that your documents can contain, and how these fields should be dealt with when adding documents to the index or when querying those fields. This file can be divided broadly into two sections:

  • The types section (the definitions of all types)

  • The fields section (the definitions of the document structure using types)

The structure of your document should be defined as fields under the fields section. Let's say you have to define a book as a document in Solr with the fields isbn, title, author, and price. The schema will be as follows:

<field name="isbn" type="string" required="true" indexed="true" stored="true"/>
<field name="title" type="text_general" indexed="true" stored="true"/>
<field name="author" type="text_general" indexed="true" stored="true" multiValued="true"/>
<field name="price" type="int" indexed="true" stored="true"/>

In the preceding schema, you see a type attribute, which defines the data type of the field. You can change the behavior of the field by changing the type. The multiValued attribute is used to tell Solr that the field can hold multiple values, while the required attribute makes the field mandatory for creating a document. After the fields section ends, we need to mention which field is going to be unique. In our case, it is going to be isbn:

<uniqueKey>isbn</uniqueKey>

The schema.xml file is also a well-commented file. I will again advise you to go through its comments; for a start, this will help you understand the various field types and data types in detail.
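To connect the two sections, here is a sketch of how the types used above might be declared in the types section. The exact definitions in your schema.xml may differ; in particular, the text_general type in the example schema usually carries a longer analyzer chain:

<types>
  <fieldType name="string" class="solr.StrField" sortMissingLast="true"/>
  <fieldType name="int" class="solr.TrieIntField" precisionStep="0" positionIncrementGap="0"/>
  <fieldType name="text_general" class="solr.TextField" positionIncrementGap="100">
    <analyzer>
      <tokenizer class="solr.StandardTokenizerFactory"/>
      <filter class="solr.LowerCaseFilterFactory"/>
    </analyzer>
  </fieldType>
</types>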

Cores in Solr (Multicore Solr)


Solr cores make it possible to run multiple indexes, with different configurations and schemas, in a single Solr instance. The multicore feature of Solr helps in the unified administration of Solr instances for complete and different applications. Cores in Solr are fairly isolated and have their own configuration and schema files. This makes it possible to create or remove cores at runtime from a Solr instance without restarting the process.

Cores in Solr are managed through a configuration file called solr.xml. The solr.xml file is present in your Solr Home directory. Since its inception, solr.xml has evolved from configuring one core to managing multiple cores and, eventually, defining parameters for SolrCloud. Do not worry much about SolrCloud if you are not aware of it, as we have a dedicated chapter that covers SolrCloud in detail. In brief, SolrCloud is the term used for Solr's distributed search and indexing. When we need to index huge amounts of data, we need to think of scalability and performance. This is where SolrCloud comes into the picture.

Starting from Solr 4.3, Solr maintains two distinct formats for solr.xml: one is legacy and the other is discovery mode. The legacy format will be supported through the 4.x series and will be deprecated in the 5.0 release of Solr. The default solr.xml config file looks something like this:

<solr>

  <solrcloud>
    <str name="host">${host:}</str>
    <int name="hostPort">${jetty.port:8983}</int>
    <str name="hostContext">${hostContext:solr}</str>
    <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
    <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
  </solrcloud>

  <shardHandlerFactory name="shardHandlerFactory"
    class="HttpShardHandlerFactory">
    <int name="socketTimeout">${socketTimeout:0}</int>
    <int name="connTimeout">${connTimeout:0}</int>
  </shardHandlerFactory>

</solr>

The preceding configuration shows that Solr configurations are SolrCloud friendly, but this does not mean that Solr is running in SolrCloud mode unless you start Solr with some special parameters (explained in Chapter 10, Distributed Indexing, which covers SolrCloud). To configure multiple cores in Solr in the legacy format, you need to edit the solr.xml file with the following code snippet and remove the existing discovery-mode code from solr.xml:

<solr persistent="false">
  <cores adminPath="/admin/cores" defaultCoreName="core1">
    <core name="core1" instanceDir="core1" />
    <core name="core2" instanceDir="core2" />
  </cores>
</solr>

Now you need to create two cores (new directories, core1 and core2) in the Solr directory. You also need to create Solr configuration files for the new cores. To do this, just copy the same configuration files (the conf directory in collection1) into both cores for now, and restart the Solr server after you have made these changes.
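For example, assuming your Solr Home is the example/solr directory and the default collection1 core is still in place, the new core directories can be created like this (a sketch; adjust the paths to your own setup):

$ cd $SOLR_HOME/example/solr
$ mkdir core1 core2
$ cp -r collection1/conf core1/
$ cp -r collection1/conf core2/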

Once you restart the Solr server with the preceding configuration, two cores will be created, with the names core1 and core2 and the existing default Solr configuration settings. The instanceDir attribute defines the directory name, relative to solr.xml, in which to look for configuration and data files. You can modify the paths of these cores according to your wishes and the configuration files according to your use case. You can also change the names of the cores.

You can verify your settings by opening the following URL in your browser: http://localhost:8983/solr/.

You will see two new cores created in the Solr dashboard. Currently, there are no documents in either core because we have not indexed any data so far. This concludes the process of creating multiple cores in Solr.

Summary


Thus, by the end of the first chapter, you have learned what Solr is, how to install and run it on various operating systems, what the various components and basic building blocks of Solr are (such as its configuration files and directory structure), and how to set up the configuration files. You also learned briefly about the architecture of Solr. In the last section, we covered multicore setup in the Solr 4.x series. However, the legacy method of multicore setup is going to be deprecated in the Solr 5.x release, after which only discovery mode, which is called SolrCloud, will remain.

In the next chapter, we will look deeply into the various components used in Solr configuration files, such as tokenizers, analyzers, filters, field types, and so on.


Key benefits

  • Learn about distributed indexing and real-time optimization to change index data on the fly
  • Index data from various sources and web crawlers using built-in analyzers and tokenizers
  • This step-by-step guide is packed with real-life examples of indexing data

Description

Apache Solr is a widely used, open source enterprise search server that delivers powerful indexing and searching features. These features help fetch relevant information from various sources and documentation. Solr also combines with other open source tools such as Apache Tika and Apache Nutch to provide more powerful features. This fast-paced guide starts by helping you set up Solr and get acquainted with its basic building blocks, to give you a better understanding of Solr indexing. You’ll quickly move on to indexing text and boosting the indexing time. Next, you’ll focus on basic indexing techniques, various index handlers designed to modify documents, and indexing a structured data source through Data Import Handler. Moving on, you will learn techniques to perform real-time indexing and atomic updates, as well as more advanced indexing techniques such as de-duplication. Later on, we’ll help you set up a cluster of Solr servers that combine fault tolerance and high availability. You will also gain insights into working scenarios of different aspects of Solr and how to use Solr with e-commerce data. By the end of the book, you will be competent and confident working with indexing and will have a good knowledge base to efficiently program elements.

What you will learn

  • Get to know the basic features of Solr indexing and the analyzers/tokenizers available
  • Index XML/JSON data in Solr using the HTTP Post tool and cURL command
  • Work with Data Import Handler to index data from a database
  • Use Apache Tika with Solr to index Word documents, PDFs, and much more
  • Utilize Apache Nutch and Solr integration to index crawled data from web pages
  • Update indexes in real-time data feeds
  • Discover techniques to index multi-language and distributed data in Solr
  • Combine the various indexing techniques into a real-life working example of an online shopping web application


Table of Contents

18 Chapters
Apache Solr for Indexing Data
Credits
About the Authors
About the Reviewers
www.PacktPub.com
Preface
Getting Started
Understanding Analyzers, Tokenizers, and Filters
Indexing Data
Indexing Data – The Basic Technique and Using Index Handlers
Indexing Data with the Help of Structured Datasources – Using DIH
Indexing Data Using Apache Tika
Apache Nutch
Commits, Real-Time Index Optimizations, and Atomic Updates
Advanced Topics – Multilanguage, Deduplication, and Others
Distributed Indexing
Case Study of Using Solr in E-Commerce
Index


FAQs

What is the delivery time and cost of the print book?

Shipping Details

USA:


Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days!
Add one extra business day for deliveries to Northern Ireland and Scottish Highlands and islands

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for Interstate metro
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days based on the distance to a destination following dispatch.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing from the next business day. So the estimated delivery times start from the next day as well. Orders received after 5 PM U.K. time (in our internal systems) on a business day or anytime on the weekend will begin printing the second to next business day. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is custom duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods. These duties are charged by special authorities and bodies created by local governments and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for the print book order?

Orders shipped to the countries listed under the EU27 will not bear customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea:

For shipments to countries outside the EU27, customs duty or localized taxes may be applicable and would be charged by the recipient country. These duties must be paid by the customer and are not included in the shipping charges on the order.

How do I know my custom duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin and several other factors like the total invoice amount or dimensions like weight, and other such criteria applicable in your country.

For example:

  • If you live in Mexico, and the declared value of your ordered items is over $ 50, for you to receive a package, you will have to pay additional import tax of 19% which will be $ 9.50 to the courier service.
  • Whereas if you live in Turkey, and the declared value of your ordered items is over € 22, for you to receive a package, you will have to pay additional import tax of 18% which will be € 3.96 to the courier service.
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing the order. Simply contact customercare@packt.com with your order details or payment transaction id. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on the way to you then when you receive it, you can contact us at customercare@packt.com using the returns and refund process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (i.e., Packt Publishing agrees to replace your printed book if it arrives damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work or is unacceptably late, please contact Customer Relations Team on customercare@packt.com with the order number and issue details as explained below:

  1. If you ordered (eBook, Video or Print Book) incorrectly or accidentally, please contact Customer Relations Team on customercare@packt.com within one hour of placing the order and we will replace/refund you the item cost.
  2. Sadly, if your eBook or Video file is faulty or a fault occurs during the eBook or Video being made available to you, i.e. during download then you should contact Customer Relations Team within 14 days of purchase on customercare@packt.com who will be able to resolve this issue for you.
  3. You will have a choice of replacement or refund of the problem items (damaged, defective, or incorrect).
  4. Once Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are only requesting a refund of one book from a multiple order, then we will refund you the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

On the off chance your printed book arrives damaged or with a material defect, contact our Customer Relations Team on customercare@packt.com within 14 days of receipt of the book with appropriate evidence of damage, and we will work with you to secure a replacement copy, if necessary. Please note that each printed book you order from us is individually made by Packt's professional book-printing partner on a print-on-demand basis.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on the laws and regulations). A localized VAT fee is charged only to our European and UK customers on eBooks, Video and subscriptions that they buy. GST is charged to Indian customers for eBooks and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal