Data integration matters because it lays the groundwork for insightful analysis in data management. In today's data-driven world, the capacity to quickly collect and harmonize data from diverse sources is critical, as that data constantly expands in volume, variety, and complexity.
This chapter explores the concept of data integration: its principles, its importance, and its implications for your day-to-day work in our increasingly data-centric world.
We will go through the following topics:

- What data integration is and why it matters
- The importance of data integration in modern businesses
- The modern data stack
- Culture and strategy
- Data integration techniques, tools, and technologies
Data integration is the process of combining data from multiple sources to assist businesses in gaining insights and making educated decisions. In the age of big data, businesses generate vast volumes of structured and unstructured data regularly. To realize the full value of this information, it must be consolidated into a format that enables efficient analysis and interpretation.
Take the example of extract, transform, and load (ETL) processing, which consists of three stages: extraction, transformation, and loading. Extraction entails gathering data from various sources, such as databases, data lakes, APIs, or flat files. Transformation involves cleaning, enriching, and transforming the extracted data into a standardized format, making it easier to combine and analyze. Finally, loading refers to transferring the transformed data into a target system, such as a data warehouse, where it can be stored, accessed, and analyzed by relevant stakeholders.
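To make these stages concrete, here is a minimal sketch of an ETL flow in Python, using only the standard library. The file names, column names, and SQLite target are hypothetical placeholders, not a prescription for real pipelines:

```python
# A minimal sketch of the three ETL stages, with SQLite standing in for the
# target system. Paths and field names are hypothetical.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    # Extraction: read raw records from a flat-file source.
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    # Transformation: clean and standardize into a common format.
    cleaned = []
    for row in rows:
        name = row["customer_name"].strip().title()
        amount = round(float(row["amount"]), 2)
        cleaned.append((name, amount))
    return cleaned

def load(records: list[tuple], db_path: str) -> None:
    # Loading: write the standardized records into the target store.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS sales (customer TEXT, amount REAL)")
    con.executemany("INSERT INTO sales VALUES (?, ?)", records)
    con.commit()
    con.close()

load(transform(extract("sales.csv")), "warehouse.db")
```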
The data integration process not only involves handling different data types, formats, and sources, but also requires addressing challenges such as data quality, consistency, and security. Moreover, data integration must be scalable and flexible to accommodate the constantly changing data landscape. The following figure depicts the scope for data integration.
Figure 2.1 – Scope for data integration
Understanding data integration as a process is critical for businesses to harness the power of their data effectively.
Warning
Data integration should not be confused with data ingestion, which is the process of moving and replicating data from various sources and loading it into the first step of the data layer with minimal transformation. Data ingestion is a necessary but not sufficient step for data integration, which involves additional tasks such as data cleansing, enrichment, and transformation.
A well-designed and well-executed data integration strategy can help organizations break down data silos, streamline data management, and derive valuable insights for better decision-making.
Data integration is critical in today's data-driven enterprises, and its importance cannot be overstated. As organizations rely more on data to guide their decisions, operations, and goals, the ability to connect disparate data sources becomes increasingly important. The following principles emphasize that importance.
Breaking down data silos is an important part of this process, since disconnected and unavailable data hampers cooperation, productivity, and the capacity to derive valuable insights. Data silos often arise when different departments or teams within an organization store their data separately, leading to a lack of cohesive understanding and analysis of the available information. Data integration tackles this issue by bringing data from several sources together in a centralized area, allowing for smooth access and analysis across the enterprise. This not only encourages greater team communication and collaboration but also builds a data-driven culture, which can greatly improve overall business performance.
Another aspect of data integration is streamlining data management, which simplifies data handling processes and eliminates the need to manually merge data from multiple sources. By automating these processes, data integration reduces the risk of errors, inconsistencies, and duplication, ensuring that stakeholders have access to accurate and up-to-date information, which allows organizations to make more informed decisions and allocate resources more effectively.
One additional benefit of data integration is the ability to acquire useful insights in real time from streaming sources such as Internet of Things (IoT) devices and social media platforms. As a result, organizations can react more quickly and efficiently to changing market conditions, customer needs, and operational issues. Real-time data can also help firms identify trends and patterns, allowing them to make proactive decisions and remain competitive.
Data integration also enhances customer experiences. By integrating data from various customer touchpoints, businesses gain a 360-degree view of their customers, allowing them to deliver personalized experiences and targeted marketing campaigns. This can lead to increased customer satisfaction, revenue, and loyalty.
In the same way, data integration improves quality: cleaning, enriching, and standardizing data during integration yields the high-quality inputs essential for accurate and reliable analysis, leading to better business outcomes.
Finally, it is necessary to take into consideration the aspects of governance and compliance with the laws. Data integration helps organizations maintain compliance with data protection regulations, such as the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA). By consolidating data in a centralized location, businesses can more effectively track, monitor, and control access to sensitive information.
Effective data integration enables businesses to gain a comprehensive view of their data, which is needed for informed decision-making. By combining data from various sources, organizations can uncover hidden patterns, trends, and insights that would have been difficult to identify otherwise.
Furthermore, data integration lets organizations combine data from different sources, enabling the discovery of new insights and fostering innovation.
The following figure depicts the position of data integration in modern business.
Figure 2.2 – The position of data integration in modern business
Companies can leverage these insights to develop new products, services, and business models, driving growth and competitive advantage.
The topics surrounding data are vast, and it is easy to get lost in this ecosystem. Let's clarify some commonly used terms that may or may not fall under data integration:
By differentiating data integration from these related data management practices, we can better understand its unique role in the modern data stack. Data integration is vital for businesses to derive valuable insights from diverse data sources, ensuring that information is accurate, up to date, and readily accessible for decision-making.
Data integration is a complex process that requires enterprises and data services to tackle various challenges to effectively combine data from multiple sources and create a unified view.
As an organization’s size increases, so does the variety and volume of data, resulting in greater technical complexity. Addressing this challenge requires a comprehensive approach to ensure seamless integration across all data types:
Industry good practice
To overcome technical challenges such as data heterogeneity, volume, and latency, organizations can leverage cloud-based technologies that offer scalability, flexibility, and speed. Cloud-based solutions can also reduce infrastructure costs and maintenance efforts, allowing organizations to focus on their core business processes.
Once data capture is in place, ideally from the initial setup, maintaining data integrity becomes essential so that decisions rest on reliable indicators. It is equally important to guarantee that the right individuals have access to the appropriate data:
Note
Data quality is a crucial aspect of data integration, as poor data quality can negatively impact the insights derived from the integrated dataset. Organizations should implement data quality tools and techniques to ensure that their data is accurate, complete, and consistent throughout the integration process.
Implementing and sustaining a comprehensive data integration platform requires the establishment, accumulation, and preservation of knowledge and skills over time:
Enterprises can establish effective data integration strategies and realize the full potential of their data assets by understanding and tackling these challenges. Implementing strong data integration processes allows firms to gain useful insights and make better decisions.
Tip
To address knowledge challenges such as integration complexity and resource constraints, organizations can use user-friendly and collaborative tools that simplify the design and execution of data integration workflows. These tools can also help reduce the dependency on expert staff and enable non-technical users to access and use data as needed.
The modern data stack is a combination of tools, technologies, and platforms designed to simplify the process of extracting, converting, and loading data from several sources into a centralized storage system. Stack components are generally chosen to fit the company's exact needs, which promotes simplicity as well as cost effectiveness. This stack enables businesses to manage, analyze, and gain insights from their data to make educated decisions. Its components can be broadly classified as shown in the following figure.
Figure 2.3 – Overview of the modern data stack
First, it is essential to identify the components that recognize, capture, and measure the integrity of the data being integrated into the platform. The modern data stack, with its multitude of components, provides organizations with a flexible and scalable framework for managing and deriving value from their data. By adopting the right tools, technologies, and platforms, organizations can create a powerful data ecosystem that supports their data-driven decision-making and business objectives.
The data stack starts with the data sources, which can include relational databases, NoSQL databases, flat files, APIs, or data streams generated by sensors or devices. These sources are responsible for producing the raw data that will be ingested, processed, and stored within the modern data stack.
Tip
Data sources are the starting point of the modern data stack, providing the raw data that will be ingested, processed, and stored within the stack. Organizations should identify and evaluate their existing and potential data sources to determine their relevance, quality, and availability for their business objectives.
Data ingestion refers to the process of moving and replicating data from various sources and loading it into the first step of the data layer with minimal transformation. Ingestion can rely on real-time streaming, change data capture (CDC), APIs, or batching, and it is the first step in ensuring a smooth and efficient data transfer process. Tools such as Airbyte or Fivetran can help build this layer.
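As a toy illustration of batch ingestion, the following sketch pulls records from a hypothetical REST endpoint and lands them with minimal transformation, which is the job that tools such as Airbyte or Fivetran industrialize. The URL and landing path are assumptions:

```python
# A minimal batch-ingestion sketch using only the standard library. The
# endpoint and directory layout are hypothetical.
import json
import urllib.request
from datetime import datetime, timezone

SOURCE_URL = "https://api.example.com/orders"  # hypothetical endpoint

def ingest(url: str, landing_dir: str) -> str:
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # Minimal transformation: just stamp the batch; structure is preserved.
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out_path = f"{landing_dir}/orders_{stamp}.json"
    with open(out_path, "w") as f:
        json.dump(payload, f)
    return out_path

# ingest(SOURCE_URL, "/data/landing")
```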
The modern data stack includes various storage technologies for managing and storing data. Options range from general-purpose solutions that provide efficient, redundant storage and adapt to many situations, to specialized systems, such as data warehouses, optimized for the heavy joins and aggregations that analytics requires. The choice of data storage depends on the organization's specific requirements and the type of data being managed. Technologies such as MinIO, Ceph, or Scality, distributed object storage systems compliant with the S3 API, can be a good foundation for the storage layer.
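Because MinIO, Ceph, and Scality speak the S3 API, a standard S3 client can address them. The following sketch uses boto3 (assuming it is installed) against a hypothetical MinIO endpoint; credentials and bucket names are placeholders:

```python
# A sketch of writing to an S3-compatible object store such as MinIO.
# Endpoint, credentials, and bucket names are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://minio.local:9000",  # S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Land a raw file in the storage layer; downstream layers read it from here.
s3.upload_file("orders_20240101.json", "raw-zone", "orders/orders_20240101.json")
```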
Data transformation is the process of combining data from different sources and creating a unified view. It involves data cleansing, validation, enrichment, and transformation operations (filter, mapping, lookup, aggregate, and so on) to ensure data consistency and quality, and it facilitates the transfer and synchronization of various data types and formats between systems and applications. This step is commonly called data integration. Compute engines such as dbt or Spark can help process your data.
Note
Transformation is a key component of the modern data stack, as it ensures that the ingested data is consistent and standardized for analysis and consumption. Organizations should define their transformation logic and rules based on their business requirements and target system specifications.
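As a sketch of such transformation logic, here is a minimal PySpark job (Spark being one of the engines named above) that cleanses, standardizes, and aggregates raw data. Paths and column names are hypothetical:

```python
# A minimal transformation sketch in PySpark: filter, standardize, and
# aggregate raw orders into a unified view. Paths and columns are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-transform").getOrCreate()

raw = spark.read.json("s3a://raw-zone/orders/")

clean = (
    raw.filter(F.col("amount") > 0)                       # cleansing: drop bad rows
       .withColumn("country", F.upper(F.col("country")))  # standardization
)

daily = clean.groupBy("order_date", "country").agg(       # aggregation
    F.sum("amount").alias("total_amount")
)

daily.write.mode("overwrite").parquet("s3a://curated-zone/daily_orders/")
```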
Data consumption can take various forms, with different methods employed to analyze and visualize information for distinct purposes. Three common approaches to data consumption include reporting/dashboarding, data science, and enterprise performance management (EPM).
Reporting and dashboarding are essential tools for organizations to effectively monitor their performance and make data-driven decisions. Reports provide structured and detailed information on various aspects of a business, while dashboards offer a visual representation of key performance indicators (KPIs) and metrics, allowing stakeholders to quickly grasp the overall health of the organization. The usage of technologies such as Tableau software combined with Presto-based solutions can help achieve that.
EPM is a comprehensive approach to company planning, consolidation, and reporting. EPM entails combining several management procedures, such as budgeting, forecasting, and financial analysis, to improve an organization’s overall performance. EPM assists businesses in achieving their goals and maintaining a competitive edge in the market by connecting business strategies with operational procedures.
Data science is an interdisciplinary field that combines cutting-edge tools and algorithms to extract insights from large and complex datasets. Data scientists use techniques such as machine learning, statistical modeling, and artificial intelligence to forecast future trends, uncover patterns, and optimize business processes, allowing firms to make more informed strategic decisions.
Tip
Consumption is the ultimate goal of the modern data stack, as it enables organizations to analyze and visualize their integrated data for various purposes. Organizations should choose the appropriate tools and methods for data consumption based on their analytical needs and capabilities.
Workflow management and monitoring ensure a seamless execution of processes and timely delivery of accurate information. Workflow management focuses on designing, automating, and coordinating the various tasks, streamlining the process, and minimizing the risk of errors. On the other hand, monitoring upholds the effectiveness and dependability of data integration workflows. By continuously tracking the progress of data integration tasks, monitoring helps identify potential bottlenecks, performance issues, and data discrepancies. This real-time oversight allows organizations to proactively address problems and ensure data quality.
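Many tools implement this layer; Apache Airflow, used below purely as an illustration (the text does not prescribe a tool), is a common choice. The sketch defines a small daily workflow whose task states the monitoring layer can then track; DAG and task names are hypothetical:

```python
# A sketch of the orchestration layer using Apache Airflow (an assumption,
# chosen for illustration). Each task's state is visible to monitoring.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders():
    print("pull new orders from the source system")

def transform_orders():
    print("cleanse and standardize the extracted orders")

def load_orders():
    print("load the curated orders into the warehouse")

with DAG(
    dag_id="nightly_orders_integration",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    # Explicit ordering lets the scheduler coordinate tasks and retry failures.
    extract >> transform >> load
```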
The set of policies, methods, and practices that regulate data collection, storage, and use is known as data governance. It tackles issues such as data quality, security, privacy, and compliance in order to ensure that data is accurate, consistent, and accessible to authorized users. A well-executed data governance structure can assist firms in maintaining data trust, reducing risks, and improving decision-making capabilities.
Observability, on the other hand, refers to the ability to monitor and comprehend the many components of a data ecosystem. It is necessary to monitor and visualize metrics, logs, and traces in order to get insight into the performance, dependability, and functionality of data pipelines, systems, and applications. Effective observability enables organizations to proactively identify and fix issues, maximize resource utilization, and ensure continuous data flow across their infrastructure. Observability, as opposed to monitoring, is concerned with the quality and consumption of data within the organization rather than technological factors. In many cases, tools such as DataHub can be very helpful in implementing observability.
Cloud-based technologies have played a significant role in shaping the modern data stack, providing organizations with greater flexibility, scalability, and cost effectiveness compared to traditional on-premises solutions. Nonetheless, the cloud strategy is not limited to the public cloud but can also be implemented through various solutions within the private cloud. The following points highlight the importance of cloud-based technologies in the modern data stack:
The paradigm has clearly shifted, as cloud-based technologies have transformed the modern data stack, offering businesses the flexibility, scalability, and cost effectiveness required to manage their data assets effectively. Organizations may build a robust, agile, and secure data stack that supports data-driven decision-making and business goals by implementing cloud-based solutions.
Over the years, the data stack has evolved significantly, shifting from traditional on-premises solutions to cloud-based technology. The necessity to manage rapidly growing volumes of data, as well as the growing need for real-time data processing and analytics, has fueled this change.
Figure 2.4 – Evolution of data stack
In the early days of data management, organizations primarily relied on monolithic, on-premises solutions such as relational databases and data warehouses. These systems were designed to handle structured data and were often limited in terms of scalability, flexibility, and integration capabilities. Data integration and processing tasks were typically performed using ETL processes, which were often time consuming and resource intensive.
The advent of big data technologies, such as Hadoop and NoSQL databases, marked a significant shift in the data stack landscape. These technologies were designed to handle large volumes of unstructured and semi-structured data, providing organizations with the ability to process and analyze diverse data sources. The implementation of distributed processing systems has significantly enhanced the handling and examination of large-scale data collections.
With the growing need to store and process various types of data, data lakes emerged as a popular alternative to traditional data warehouses. Data lakes are large-scale storage repositories that can store raw, unprocessed data in its native format, offering greater flexibility and scalability. Organizations began adopting data lake architectures to accommodate the diverse data types and sources they were working with, enabling them to perform more advanced analytics and derive deeper insights.
As cloud computing gained popularity, businesses began to use cloud-based services to construct and manage their data stacks. The cloud had various advantages over traditional options, including nearly limitless scalability, cost effectiveness, and access to a diverse set of tools and services. Cloud-based data storage solutions grew in popularity as a means of storing data on the cloud, while managed services offered scalable data warehousing and analytics capabilities.
The modern data stack draws upon the cumulative advancements of previous iterations, harnessing the best aspects of each stack to deliver an optimized solution. This modern approach to data management is highly versatile, ensuring its relevance and adaptability in today's fast-changing technological scene. The introduction of IoT is a crucial development that has shaped the modern data stack. With billions of connected devices across the world continuously producing large volumes of data, IoT has spurred the demand for efficient and scalable streaming solutions. These systems are specifically intended to handle real-time data processing, allowing enterprises to make more educated decisions based on up-to-date information. The modern data stack also stresses data quality, governance, and security, ensuring that enterprises can trust and successfully manage their data.
Adopting a modern data stack approach brings numerous benefits to organizations, allowing them to leverage the latest technologies and best practices in data management, integration, and analytics. Some of the key benefits of embracing a modern data stack include the following:
Adopting a modern data stack approach offers organizations numerous benefits, including scalability, flexibility, agility, cost efficiency, improved data quality, real-time analytics, and ease of use. By embracing the modern data stack, organizations can build a robust and agile data infrastructure that supports their data-driven decision making and business objectives.
Next, we’ll discuss culture and strategy.
In today’s corporate environment, data has become a critical tool for firms seeking to obtain insights, make educated decisions, and maintain a competitive edge. Companies must understand their existing and future data cultures, as well as develop a well-defined data strategy, in order to properly harness the power of their data. There are different techniques for data management, each with advantages and disadvantages. This section will look at several data management strategies, the concept of data-centricity, and how businesses might use data as a service, product, or mesh.
As you embark on efforts concerning a company’s data platform or systems, a crucial first step involves evaluating the existing and desired state of the organization’s data culture mindset. Here are some of the various data-centric cultures.
Data anarchy refers to a situation where business operations professionals, dissatisfied with their IT department’s support, create and manage their own unofficial databases or “shadow IT.” In this approach, data is scattered across various systems, departments, and individuals without any centralized control or governance. While data anarchy provides flexibility and autonomy to individual teams, it can lead to inconsistency, duplication, and data silos, making it difficult to obtain a comprehensive view of the organization’s data landscape.
Data monarchy centralizes data management under a single authority, often the IT department or a dedicated data team. This approach ensures data consistency and standardization but can hinder agility and slow down data access for business users, who often have to rely on the central authority for data requests.
In a data aristocracy, multiple data stewards from different departments share the responsibility of managing data. This approach balances centralization and decentralization, allowing for better collaboration and data sharing across the organization while maintaining some level of control and governance.
Data democracy empowers all employees to access and use data as needed for their job functions. This approach fosters a data-driven culture, encourages innovation, and improves decision-making across the organization. However, it requires robust data governance policies and practices to ensure data quality, security, and compliance.
Data already exists in your organization; the objective is to foster greater commitment to it and better structure around it. Under these circumstances, it is advantageous to establish the approach, or approaches, to data administration:
Organizations must carefully assess their data management needs, internal capabilities, and strategic goals to determine the best approach for their data strategy, depending on their current and future data cultures. By adopting a data-centric mindset and leveraging innovative data models such as Data as a Service (DaaS), Data as a Product (DaaP), and data mesh, companies can harness the full potential of their data to drive growth, innovation, and success in today's competitive business environment.
Next, we’ll discuss data integration techniques, tools, and technologies.
Data integration is a complex process that necessitates the use of numerous tools and technologies to extract, transform, and load data from diverse sources into a centralized location. In this section, we will go over some of the most important data integration tools and technologies, including open source and commercial solutions, as well as criteria to consider when choosing the right tools and technologies for your data integration project.
Data integration is an essential process for organizations that need to consolidate data from disparate sources to gain insights and make informed decisions. However, the process can be challenging, especially when dealing with large volumes of data from heterogeneous sources. Data integration involves extracting data from multiple sources, transforming it into a consistent format, and loading it into a central location. To achieve this, organizations need tools and technologies that can help them streamline the process and ensure data quality, ranging from open source solutions to commercial tools.
Data integration architectures outline the diverse approaches to processing and transferring data from source to target systems. These methods can be mixed and matched according to specific requirements, considering that the slowest transformation method will impact the overall processing time.
Batch processing, for example, which involves gathering and processing data in big batches at regular intervals, is appropriate for large-scale projects where data latency is not an issue. Micro-batching is a batch processing variation that works with smaller batches at shorter intervals and suits applications that require low latency but not real-time processing. Real-time processing, on the other hand, is perfect for projects that require low latency, where data must be processed and evaluated quickly. Incremental processing is appropriate for cases in which enormous amounts of data are generated but only a small fraction changes over time, reducing processing time while preserving coherence.
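To illustrate incremental processing, the following sketch re-processes only rows whose update timestamp exceeds a stored high-water mark. SQLite stands in for the real source and target, and table and column names are hypothetical:

```python
# A sketch of incremental processing: only rows changed since the stored
# high-water mark are re-processed. Names and schemas are assumptions.
import sqlite3

def incremental_load(source_db: str, target_db: str) -> None:
    src = sqlite3.connect(source_db)
    tgt = sqlite3.connect(target_db)
    tgt.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(id INTEGER PRIMARY KEY, amount REAL, updated_at TEXT)"
    )
    tgt.execute("CREATE TABLE IF NOT EXISTS watermark (last_ts TEXT)")
    row = tgt.execute("SELECT MAX(last_ts) FROM watermark").fetchone()
    last_ts = row[0] or "1970-01-01T00:00:00"

    # Pull only the small fraction of rows that changed since the last run.
    changed = src.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?",
        (last_ts,),
    ).fetchall()

    # Upsert the changed rows and advance the high-water mark.
    tgt.executemany(
        "INSERT OR REPLACE INTO orders (id, amount, updated_at) VALUES (?, ?, ?)",
        changed,
    )
    if changed:
        tgt.execute("INSERT INTO watermark VALUES (?)",
                    (max(r[2] for r in changed),))
    tgt.commit()
    src.close()
    tgt.close()
```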
Data integration patterns, such as ETL and extract, load, and transform (ELT), refer to the different ways data is transformed and loaded into the target system. ETL is a traditional, batch-oriented approach suited to projects where data quality requirements and transformations are complex. In contrast, ELT is a modern method that leverages the processing power of the target system and fits projects where transformations are relatively simple.
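The contrast is easiest to see in code. The following ELT sketch loads raw rows first and then transforms them inside the target with SQL, letting the target system do the work; SQLite stands in for a warehouse, and the file and column names are assumptions:

```python
# A minimal ELT sketch: load raw data as-is, then transform inside the target.
import csv
import sqlite3

con = sqlite3.connect("warehouse.db")

# Load: land the raw data with no upfront transformation.
con.execute("CREATE TABLE IF NOT EXISTS raw_sales (customer TEXT, amount TEXT)")
with open("sales.csv", newline="") as f:
    rows = [(r["customer"], r["amount"]) for r in csv.DictReader(f)]
con.executemany("INSERT INTO raw_sales VALUES (?, ?)", rows)

# Transform: executed by the target itself, after loading.
con.execute("""
    CREATE TABLE IF NOT EXISTS sales_by_customer AS
    SELECT TRIM(customer) AS customer,
           SUM(CAST(amount AS REAL)) AS total_amount
    FROM raw_sales
    GROUP BY TRIM(customer)
""")
con.commit()
con.close()
```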
There are various tools and technologies available for data integration, each with its own advantages and limitations. Here are some of the key tools and technologies used for data integration.
ETL tools are software applications that automate the ETL process; they can be code-based or graphical user interface (GUI) based. The tools help design and execute ETL workflows, map data elements between sources, and transform the data. ETL tools can be on-premises or cloud-based, and they may be commercial or open source.
The following figure shows the ETL steps:
Figure 2.5 – ETL steps
The first step extracts data from source systems; the ETL tool connects to them using connectors or APIs and reads the data. The second step is the most complex: data is transformed into a format or model suited for analysis, through operations such as cleaning, normalization, enrichment, and filtering. The third and final step loads the result into the target storage system, such as a data lake or data warehouse.
Data integration middleware is software that provides a standardized interface for data exchange between different applications, databases, and platforms. Data integration middleware can handle complex data transformations, and it can also provide advanced features such as data quality, data governance, and data security. Middleware can take many forms, the most common being an enterprise service bus (ESB). It can be used to integrate different applications, such as customer relationship management (CRM) and enterprise resource planning (ERP) systems, to enable interoperability and facilitate data exchange.
The following figure shows data integration middleware:
Figure 2.6 – Data integration middleware
Data integration middleware can be used to move data between source systems and a central data repository.
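The following toy sketch captures the core middleware idea: producers and consumers exchange data through a standardized interface rather than point-to-point links. A real ESB adds routing, transformation, and delivery guarantees on top of this pattern; the topic and payload here are hypothetical:

```python
# A toy message bus illustrating middleware-style decoupling: the publisher
# and subscriber never reference each other directly.
from collections import defaultdict
from typing import Callable

class MessageBus:
    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
# The CRM publishes; the ERP consumes, without either knowing the other.
bus.subscribe("customer.updated", lambda msg: print("ERP sync:", msg))
bus.publish("customer.updated", {"id": 42, "email": "ada@example.com"})
```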
Cloud-based integration platforms provide a cloud-based infrastructure for data integration, enabling organizations to access and integrate data from different sources and applications. Cloud-based integration platforms can be more cost effective than on-premises solutions, and they can also provide scalability and flexibility.
The following figure shows cloud-based integration platforms:
Figure 2.7 – Cloud-based integration platforms
Here’s an overview of how these platforms typically operate:
Data virtualization tools represent a modern approach to data management: they enable organizations to access and integrate data from different sources and applications without physically moving or replicating it. These tools stand out for providing real-time access to data and for reducing the costs of data replication and storage. Their operation involves several key steps. The following figure shows data virtualization tools:
Figure 2.8 – Data virtualization tools
Here’s an overview of how these platforms typically operate:
This real-time process eliminates the need for data replication or physical movement, leading to significant reductions in storage costs and complexities associated with maintaining data consistency. Consequently, data virtualization tools offer a flexible, efficient, and cost effective means for data integration, empowering organizations to utilize their data assets more effectively for analytics and informed decision-making.
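As a simplified illustration of the pattern, the following sketch answers a query by federating two live sources at request time, joining in memory instead of replicating into a central store. It uses pandas and SQLite as stand-ins; the schemas and database names are hypothetical:

```python
# A sketch of the virtualization idea: a unified view assembled at query time
# from sources that stay in place. Requires pandas; names are assumptions.
import sqlite3

import pandas as pd

def unified_customer_view(customer_id: int) -> pd.DataFrame:
    crm = sqlite3.connect("crm.db")          # source 1: stays in place
    billing = sqlite3.connect("billing.db")  # source 2: stays in place

    profile = pd.read_sql(
        "SELECT id, name, segment FROM customers WHERE id = ?",
        crm, params=(customer_id,),
    )
    invoices = pd.read_sql(
        "SELECT customer_id, SUM(total) AS lifetime_value FROM invoices "
        "WHERE customer_id = ? GROUP BY customer_id",
        billing, params=(customer_id,),
    )
    # Join in memory at query time; nothing is copied to a central repository.
    return profile.merge(invoices, left_on="id", right_on="customer_id")
```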
These tools can be implemented in different forms:
Data quality tools improve the accuracy, consistency, completeness, and integrity of data during integration processes by offering a suite of functionalities. Here's a succinct overview of the main steps in maintaining and enhancing data quality:
These tools automate many aspects of the data quality process, significantly reducing manual effort and the likelihood of human error, and they are integral to ensuring that the data used for analysis and decision-making is reliable and valuable. They can be code based, configuration based, UI based, or a combination of these.
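A minimal sketch of such checks, using pandas with hypothetical column names and an assumed completeness threshold, might look like this:

```python
# A sketch of typical data quality functions: profiling, validation, and
# simple cleansing. Columns and the 95% threshold are assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame) -> dict:
    # Profiling: measure completeness and uniqueness before integration.
    return {
        "rows": len(df),
        "null_ratio": df.isna().mean().to_dict(),
        "duplicate_keys": int(df["customer_id"].duplicated().sum()),
    }

def enforce_rules(df: pd.DataFrame) -> pd.DataFrame:
    # Validation and cleansing: drop invalid rows, standardize values.
    df = df.drop_duplicates(subset="customer_id")
    df = df[df["amount"] >= 0]                          # consistency rule
    df["email"] = df["email"].str.strip().str.lower()   # standardization
    return df

def assert_complete(df: pd.DataFrame, column: str, threshold: float = 0.95) -> None:
    # Fail fast if completeness drops below the (assumed) threshold.
    completeness = 1 - df[column].isna().mean()
    if completeness < threshold:
        raise ValueError(
            f"{column} completeness {completeness:.2%} below {threshold:.0%}"
        )
```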
Data integration tools and technologies can be commercial or open source. Commercial tools require purchasing a usage license for unrestricted access, including current and future features, and for vendor-based support. Open source solutions are supported by the community and sometimes by service companies, so the associated cost relates directly to the services needed to adopt and operate these solutions.
When selecting the right data integration tools and technologies for your project, you need to consider several factors, including the size and complexity of the data sources, the target system's processing power, the project's requirements, and the organization's budget and IT expertise. The following table lists the factors to consider when selecting tools and technologies.
Category | Criteria | Description
Adaptive architecture | Scalability | Tools should manage current data volumes and expand for future growth.
 | Flexibility | Solutions must support various data formats, structures, and diverse transformations.
 | Integration capabilities | Ensure tools integrate with various sources and platforms, providing standardized interfaces.
 | Support and maintenance | Choose tools backed by reliable vendors with strong support and maintenance.
Governance and compliance | Data governance features | Tools should include data lineage, stewardship, and cataloging for consistent, compliant management.
 | Security features | Prioritize tools with robust security measures such as encryption, access controls, and auditing.
 | Data quality features | Look for tools ensuring data quality through profiling, cleansing, and enrichment.
Company strategy | Company cultures | Consider tools aligning with the organization's values and practices.
 | Team expertise | Select tools matching team skills or those with manageable learning curves to reduce training costs.
 | Ease of use | Prefer tools with user-friendly interfaces for all users.
 | Cost | Tools should be affordable, considering license fees, implementation, and maintenance costs against the budget.
Table 2.1 – Factors to consider when selecting tools and technologies
When selecting the most suitable data integration tools and technologies for your endeavor, it’s vital to take into account a range of aspects, such as data source size and intricacy, the target system’s processing capabilities, project necessities, and the organization’s financial resources and technical know-how. Moreover, assessing adaptive architecture, governance and compliance, and corporate strategy is essential. By carefully examining these elements, you can make sure that the chosen tools and technologies correspond to your organization’s cultural environment, team expertise, and financial limitations, ultimately resulting in a successful data integration project tailored to your specific needs and objectives.
Data integration tools and technologies are essential for enterprises to maximize the value of their data assets. By understanding the available options and weighing the size and complexity of the data sources, the target system's processing power, the project's requirements, and the organization's budget and IT competence, organizations can choose the best approach for their project and achieve their integration goals effectively and efficiently. With the right tools and technologies in place, data is integrated accurately, efficiently, and securely, allowing organizations to make data-driven decisions and gain a competitive advantage in the data-driven business landscape.
In the next chapter, we’ll delve into the evolution of data integration and investigate the diverse structures established to cater to requirements at every phase.