The Data and Analytics Journey So Far
We have been surrounded by digital data for almost a century now, and every decade has brought its own challenges in getting the best value out of that data. But these challenges were narrow in scope and manageable because the data itself was manageable. Even though data was growing rapidly in the 20th century, its volume, velocity, and variety were still limited. Then we hit the 21st century and the world of data changed drastically. Data started to grow exponentially for several reasons:
- The adoption of the internet picked up speed and data grew into big data
- Smartphone devices became a common household entity and these devices all generated tons of data
- Social media took off and added to the deluge of information
- Robotics, smart edge devices, industrial devices, drones, gaming, VR, and other artificial intelligence-driven gadgets took the growth of data to a whole new level
However, across all this, the common theme that exists even today is that data gets produced, processed, stored, and consumed.
Now, even though the history of data and analytics goes back many decades, I don’t want to dig everything up. Since this book revolves around cloud computing technologies, it is important to understand how we got here, what systems were in place in the on-premises data center world, and why those same systems and the architectural patterns surrounding them struggle to cater to the business and technology needs of today.
In this prologue, we will cover the following main topics:
- Introduction to the data and analytics journey
- Traditional data platforms
- Challenges with on-premises data systems
- What this book is all about
If you are already well versed with the traditional data platforms and their challenges, you can skip this introduction and directly jump to Chapter 1.
Introduction to the data and analytics journey
The online transaction processing (OLTP) and online analytical processing (OLAP) systems worked great by themselves for a very long time, when data producers were limited, the volume of data was under control, and data was mostly structured in tabular format. The last 20 years have seen a seismic shift in the way new businesses and technologies have emerged.
As the volume, velocity, and variety of data started to pick up steam, data grew into big data and the data processing techniques needed a major overhaul. This gave rise to the Apache Hadoop framework, which changed the way big data was processed and stored. With more data, businesses wanted to get more descriptive and diagnostic analytics out of their data. At the same time, another technology was gaining rapid traction, one that gave organizations hope that they could look ahead and predict what might happen in advance so that they could take immediate action to steer their businesses in the right direction. This was made possible by the rise of artificial intelligence and machine learning, and soon large organizations started investing in predictive analytics projects.
And while we were thinking that we had big data under control with these new frameworks, the data floodgates opened up. The last 10 to 15 years have been revolutionary with the onset of smart devices, including smartphones. Connectivity among all these devices and systems made data grow exponentially; this was termed the Internet of Things (IoT). To add to the complexity, these devices started to share data in near real time, which meant that data had to be streamed immediately for consumption. The following figure highlights many of the sources from which data gets generated. A lot of insights can be derived from all this data so that organizations can make faster and better decisions:
Figure 00.1 – Big data sources
This also meant that organizations started to carefully organize their technical workforce into data-focused personas. The people processing big data came to be known as data engineers, the people using data for future predictions were the data scientists, and the people analyzing the data with various tools were the data analysts. Each persona had a well-defined set of tasks, and there was a strong desire to build or buy the best technological tools out there to make their day-to-day work easier.
From a data and analytics point of view, systems started to grow bigger with extra hardware. Organizations started to expand their on-premises data centers with the latest and greatest servers to process all this data as fast as possible and create value for their businesses. However, many of the architecture patterns for data and analytics remained the same, which meant that the old use cases were still getting solved; as new demands came in from these businesses, pain points started popping up more frequently.
Traditional data platforms
Before we get into architecting data platforms in a modern way, it is important to understand traditional data platforms and know their strengths and limitations. Once we understand the challenges that traditional data platforms face in solving new business use cases, we can design a modern data platform in a holistic manner.
Throughout the 1980s and 1990s, the three-tier architecture became a popular way of producing, processing, and storing data. Almost every organization used this pattern as it met the business needs with ease. The three tiers of this architecture were the presentation tier, the application tier, and the data tier:
- The presentation tier was the front-facing module and was created either as a thick client – that is, software was installed on the client’s local machine – or as a thin client – that is, a browser-based application.
- The application tier would receive the data from the presentation tier and process this data with business logic hosted on the application server.
- The data tier was the final resting place for the business data. The data tier was typically a relational database where data was stored in rows and columns of tables.
Figure 00.2 represents a typical three-tier architecture:
Figure 00.2 – A traditional three-tier architecture pattern
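To make the separation of tiers a bit more concrete, here is a minimal, hypothetical sketch in Python that uses an in-memory SQLite database as the data tier; the function names, schema, and console-based front end are illustrative assumptions, not a prescribed implementation:

```python
import sqlite3

# Data tier: a relational store (SQLite stands in for a production RDBMS here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

# Application tier: business logic that validates the request and persists it.
def place_order(customer: str, amount: float) -> int:
    if amount <= 0:
        raise ValueError("Order amount must be positive")
    cur = conn.execute(
        "INSERT INTO orders (customer, amount) VALUES (?, ?)", (customer, amount)
    )
    conn.commit()
    return cur.lastrowid

# Presentation tier: a thin front end (here, just a function returning a message)
# that collects input and hands it to the application tier.
def submit_order_form(customer: str, amount: float) -> str:
    order_id = place_order(customer, amount)
    return f"Order {order_id} confirmed for {customer}"

print(submit_order_form("Acme Corp", 125.50))
```

In a real system, the presentation tier would be a desktop or browser application and the application tier would run on a dedicated application server, but the division of responsibilities stays the same.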
This three-tier architecture worked well to meet the transactional nature of businesses. To a certain extent, this system was able to provide a basic reporting mechanism to help organizations understand what was happening with their business. But the kind of technology used in this architecture fell short of going a step further to identify and understand why certain things were happening in the business. So, a new architecture pattern was required, one that could decouple this transactional system from analytics-type operations. This paved the way for the creation of the enterprise data warehouse (EDW).
Enterprise data warehouse (EDW)
The need for a data warehouse came from the realistic expectations of organizations to derive business intelligence out of the data they were collecting so that they could get better insights from this data and make the necessary adjustments to their business practices. For example, if a retailer is seeing a steady decline in sales from a particular region, they would want to understand what is contributing to this decline.
Now, let’s trace the data flow. All the transactional data is captured by the presentation tier, processed by the application tier, and stored in the data tier of the three-tier architecture. The database behind the data tier is always online and optimized for processing a large number of transactions, which come in the form of INSERT, UPDATE, and DELETE statements. This database also emphasizes fast query processing while maintaining atomicity, consistency, isolation, and durability (ACID) compliance. For this reason, this type of data store is called OLTP.
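As a rough illustration of this transactional behavior, the following hypothetical Python snippet uses SQLite to wrap an INSERT, an UPDATE, and a DELETE in a single atomic unit of work; the table and column names are invented for the example:

```python
import sqlite3

# A stand-in OLTP store; a production system would use a full RDBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (account_id INTEGER PRIMARY KEY, balance REAL NOT NULL)")
conn.execute("INSERT INTO accounts VALUES (1, 500.0), (2, 0.0)")
conn.commit()

try:
    # The INSERT, UPDATE, and DELETE below succeed or fail as one unit (atomicity).
    conn.execute("INSERT INTO accounts (account_id, balance) VALUES (3, 0.0)")
    conn.execute("UPDATE accounts SET balance = balance - 100 WHERE account_id = 1")
    conn.execute("UPDATE accounts SET balance = balance + 100 WHERE account_id = 3")
    conn.execute("DELETE FROM accounts WHERE account_id = 2")
    conn.commit()
except sqlite3.Error:
    # Any failure rolls the whole transaction back, keeping the data consistent.
    conn.rollback()
    raise

print(conn.execute("SELECT account_id, balance FROM accounts ORDER BY account_id").fetchall())
```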
To further analyze this data, a path needs to be created to bring the relevant data over from the OLTP system into the data warehouse. This is where the extract, transform, and load (ETL) layer comes into the picture. Once the data has been brought over to the data warehouse, organizations can create the business intelligence (BI) they need via the reporting and dashboarding capabilities provided by the visualization tier. We will cover the ETL and BI layers in detail in later chapters; for now, the focus is on walking through the process and the history behind it.
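To give a feel for what such a path might look like, here is a deliberately simplified, hypothetical sketch in Python that extracts rows from an OLTP-style SQLite database, applies a small aggregation, and loads the result into a warehouse-style table; real ETL tools, schedules, and schemas are far more involved, and all the names used here are assumptions:

```python
import sqlite3

# Source: an OLTP-style database with granular order rows.
oltp = sqlite3.connect(":memory:")
oltp.execute("CREATE TABLE orders (order_id INTEGER, region TEXT, amount REAL, order_date TEXT)")
oltp.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, "EMEA", 120.0, "2023-01-05"),
     (2, "EMEA", 80.0, "2023-01-06"),
     (3, "APAC", 200.0, "2023-01-06")],
)

# Target: a warehouse-style database holding an aggregated table.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE daily_sales_by_region (order_date TEXT, region TEXT, total_amount REAL)")

# Extract: pull the raw rows out of the transactional system.
rows = oltp.execute("SELECT order_date, region, amount FROM orders").fetchall()

# Transform: aggregate order amounts per day and region.
totals = {}
for order_date, region, amount in rows:
    totals[(order_date, region)] = totals.get((order_date, region), 0.0) + amount

# Load: write the transformed records into the warehouse table.
warehouse.executemany(
    "INSERT INTO daily_sales_by_region VALUES (?, ?, ?)",
    [(d, r, t) for (d, r), t in totals.items()],
)
warehouse.commit()

print(warehouse.execute(
    "SELECT * FROM daily_sales_by_region ORDER BY order_date, region"
).fetchall())
```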
The data warehouse system is distinctly different from the transactional database system. Firstly, the data warehouse does not constantly get bombarded by transactional data from customer-facing applications. Secondly, the types of operations happening in the data warehouse are specific to mining insights from all the data, including historical data. This system is constantly performing operations such as data aggregation, roll-ups (data consolidation), drill-downs, and slicing and dicing of the data. For this reason, the data warehouse is called OLAP. The following figure shows the two systems working together:
Figure 00.3 – The OLTP and OLAP systems working together
The preceding diagram shows all the pieces together. This architectural pattern is still relevant and works great in many cases. However, in the era of cloud computing, business use cases are also rapidly evolving. In the following sections, we will take a look at variations of this design pattern, as well as their advantages and shortcomings.
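As a rough sketch of the OLAP-style operations mentioned above, the following hypothetical pandas example rolls store-level sales up to the region level, drills back down for one region, and slices out a single quarter; the dataset and column names are invented purely for illustration:

```python
import pandas as pd

# A tiny, made-up slice of warehouse data at store/quarter granularity.
sales = pd.DataFrame({
    "region":  ["East", "East", "West", "West", "West"],
    "store":   ["E1", "E2", "W1", "W1", "W2"],
    "quarter": ["Q1", "Q1", "Q1", "Q2", "Q2"],
    "revenue": [100.0, 150.0, 200.0, 180.0, 90.0],
})

# Roll-up: consolidate store-level revenue to the region/quarter level.
rollup = sales.groupby(["region", "quarter"], as_index=False)["revenue"].sum()
print(rollup)

# Drill-down: go back to the finer store-level detail for one region.
drilldown = (
    sales[sales["region"] == "West"]
    .groupby(["store", "quarter"], as_index=False)["revenue"].sum()
)
print(drilldown)

# Slice: restrict the data to a single quarter across all regions.
q1_slice = sales[sales["quarter"] == "Q1"]
print(q1_slice)
```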
Bottom-up data warehouse approach
Ralph Kimball, one of the original architects of data warehousing, proposed the idea of designing the data warehouse with a bottom-up approach. This involves creating many smaller, purpose-built data marts inside a data warehouse. A data mart is a subset of the larger data warehouse that focuses on the use cases of a specific line of business (LOB) or a specific team. All of these data marts can be combined to form an enterprise-wide data warehouse. The design of each data mart is also kept simple by largely modeling the data as a star schema. A star schema keeps the data in sets of denormalized tables. These are known as fact tables, and they store all the transactional and event data. Since these tables store all the fast-moving granular data, they accumulate a large number of records over a short period. Then, there are the dimension tables, which typically store descriptive data such as details about people and organizations, product information, geographical information, and so forth. Since such information isn’t produced or changed as rapidly as the data in fact tables, dimension tables are relatively small in terms of the number of records stored. The following figure shows a bottom-up EDW design approach where individual data marts contribute toward a bigger data warehouse:
Figure 00.4 – Bottom-up EDW design
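Below is a minimal, hypothetical star schema sketched in Python with SQLite: a single fact table for sales events surrounded by two dimension tables, joined together in a typical reporting query. The table and column names are assumptions made for illustration only:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Dimension tables: relatively small, slowly changing descriptive data.
db.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, product_name TEXT, category TEXT)")
db.execute("CREATE TABLE dim_store (store_id INTEGER PRIMARY KEY, store_name TEXT, region TEXT)")

# Fact table: fast-growing, granular transactional events keyed to the dimensions.
db.execute("""
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        store_id INTEGER REFERENCES dim_store(store_id),
        sale_date TEXT,
        quantity INTEGER,
        amount REAL
    )
""")

db.executemany("INSERT INTO dim_product VALUES (?, ?, ?)",
               [(1, "Laptop", "Electronics"), (2, "Desk", "Furniture")])
db.executemany("INSERT INTO dim_store VALUES (?, ?, ?)",
               [(10, "Downtown", "East"), (20, "Airport", "West")])
db.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?, ?, ?)",
               [(100, 1, 10, "2023-03-01", 2, 2400.0),
                (101, 2, 20, "2023-03-01", 1, 350.0)])

# A typical star-join query: facts joined to dimensions, aggregated by category and region.
query = """
    SELECT p.category, s.region, SUM(f.amount) AS total_sales
    FROM fact_sales f
    JOIN dim_product p ON f.product_id = p.product_id
    JOIN dim_store   s ON f.store_id   = s.store_id
    GROUP BY p.category, s.region
"""
print(db.execute(query).fetchall())
```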
Benefits of the bottom-up approach
- The EDW gets systemically built over a certain period with business-specific groupings of data marts.
- The data model is typically designed as a star schema, which makes it denormalized in nature. Some data becomes redundant in this approach, but overall it helps the data marts perform better.
- An EDW is easier to create since the time taken to set up individual business-specific data marts is shorter compared to setting up an enterprise-wide warehouse.
- An EDW that contains data marts also makes it better suited for setting up data lakes. We will cover everything about data lakes in subsequent chapters.
Shortcomings of the bottom-up approach
- It is challenging to achieve a fully harmonized integration layer because the EDW is purpose-built for each use case in the form of data marts. Data redundancy also makes it difficult to create a single source of truth.
- Denormalized schemas create data redundancy, which makes the tables grow very large. This slows down the performance of ETL job pipelines.
- Since the data marts are tightly coupled to the specific business use cases, managing structural changes and their dependencies on the data warehouse becomes a cumbersome process.
Top-down data warehouse approach
Bill Inmon, widely recognized as the father of data warehouses, proposed the idea of designing the data warehouse with a top-down approach. In this approach, a single source of truth for the data in the form of an EDW is constructed first using a normalized data model to reduce data redundancy. Data from different sources is mapped to a single data model, which means that all the source elements are transformed and formatted to fit in this enterprise-wide structure that’s created in the data warehouse. The following figure shows a top-down EDW design approach where the warehouse is built first before smaller data marts are created for consumers:
Figure 00.5 – Top-down EDW design
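To contrast this with the denormalized star schema shown earlier, here is a small, hypothetical sketch of a normalized model in Python with SQLite, where customer, address, and order details live in separate tables and are referenced by key rather than repeated on every row; the schema is invented for illustration:

```python
import sqlite3

db = sqlite3.connect(":memory:")

# Normalized design: each entity is stored once and referenced by key,
# which reduces redundancy at the cost of more joins at query time.
db.execute("CREATE TABLE customer (customer_id INTEGER PRIMARY KEY, name TEXT)")
db.execute("""
    CREATE TABLE address (
        address_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),
        city TEXT, country TEXT
    )
""")
db.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customer(customer_id),
        order_date TEXT, amount REAL
    )
""")

db.execute("INSERT INTO customer VALUES (1, 'Acme Corp')")
db.execute("INSERT INTO address VALUES (1, 1, 'Berlin', 'DE')")
db.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)",
               [(100, 1, "2023-03-01", 250.0), (101, 1, "2023-03-02", 90.0)])

# Reassembling a business view requires joining the normalized tables back together.
query = """
    SELECT c.name, a.country, SUM(o.amount) AS total_spend
    FROM orders o
    JOIN customer c ON o.customer_id = c.customer_id
    JOIN address  a ON a.customer_id = c.customer_id
    GROUP BY c.name, a.country
"""
print(db.execute(query).fetchall())
```

Reassembling a business-friendly view from this model requires joining the tables back together, which is why additional ETL is needed to build consumption-ready data marts on top of such a warehouse.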
Benefits of the top-down approach
- The data model is highly normalized, which reduces data redundancy
- Since it’s not tied to a specific LOB or use case, the data warehouse can evolve independently at an enterprise level
- It provides flexibility for any business requirement changes or data structure updates
- ETL pipelines are simpler to create and maintain
Shortcomings of the top-down approach
- A normalized data model increases the complexity of schema design
- A large number of joins on the normalized tables can make the system compute-intensive and expensive over time
- Additional logic is required to create a business-specific data consumption layer, which means additional ETL processes are needed to create data marts from the unified EDW
Challenges with on-premises data systems
The hardware that was used to process, store, and consume data had to be procured up front, then installed and configured before it was ready for use. So, there was operational overhead and risk associated with procuring the hardware, provisioning it, installing software, and maintaining the system at all times. Also, to accommodate future data growth, people had to estimate additional capacity far in advance. The concept of hardware elasticity didn’t exist, which meant that the systems in place carried scalability risks, and these risks would surface whenever there was a sudden growth in the volume of data or a market expansion for the business.
Buying all this extra hardware up front also meant that a huge capital expenditure had to be made, with all the extra capacity lying unused from time to time. Also, software licenses had to be paid for, and those were expensive, adding to the overall IT costs. Even after buying all the hardware up front, it was difficult to maintain the data platform’s high performance all the time. As data volumes grew, latency started creeping in, which adversely affected the performance of certain critical systems.
As data grew into big data, the type of data produced was not just structured data; a lot of business use cases required semi-structured data, such as JSON files, and even unstructured data, such as images and PDF files. In subsequent chapters, we will go through some use cases that specify different types of data.
As the sources of data grew, so did the number of ETL pipelines. Managing these pipelines became cumbersome. And on top of that, with so much data movement, data started to duplicate at multiple places, which made it difficult to create a single source of truth for the data.
On the flip side, with so many data sources and data owners within an organization, data became siloed, which made it difficult to share across different LOBs in the organization.
Most of the enterprise data was stored either in an OLTP system, such as an RDBMS, or in an OLAP system, such as a data warehouse. This meant that organizations tried to solve most of their new use cases using the systems they had invested in so heavily. The challenge was that these systems were built and optimized for specific types of operations only. Soon, it became evident that to solve other types of data and analytics use cases, specific types of systems needed to be in place to meet the performance requirements.
Lastly, as businesses started to expand in other geographies, these systems needed to be expanded to other locations. And a lot of time, effort, and money was spent scaling the data platform and making it resilient in case of failures.
What this book is all about
Before we wrap up this prologue and dive into more details in subsequent chapters, I want to lay the foundation for what you should expect from this book and how the content is laid out.
When you think of a data platform in an organization, it contains many systems that work in tandem to keep the platform operational: different types of purpose-built data stores, ETL tools and pipelines for moving data between those stores, systems that allow end users to consume the data, and security and governance mechanisms to keep the platform protected and safe.
To allow the data platform to cater to different types of use cases, it needs to be designed and architected in the best possible manner. With exponential data growth and the need to solve new business use cases, these architectural patterns need to constantly evolve, not just for current needs but also for future ones. Organizations are looking to move to the public cloud as quickly as they can to make their data platforms scalable, agile, performant, cost-effective, and secure.
Amazon Web Services (AWS) provides the broadest and deepest set of data, analytics, and AI/ML services. Organizations can use AWS services to help them derive insights from their data. This book will walk you through how to architect and design your data platform, for specific business use cases, using different AWS services.
In Chapter 1, we will look at what a modern data architecture on AWS looks like and the pillars that this architecture is built on. The remainder of this book is organized around those pillars. We will start with a typical data and analytics use case and build on top of it as new use cases come along. By doing this, you will see the progressive build-up of the data platform across a variety of use cases.
One thing to note is that this book won’t have a lot of hands-on coding or other implementation exercises. The idea here is to provide architecture patterns and how multiple AWS services, along with their specific features, help solve a particular problem. However, at the end of each chapter, I will provide links to hands-on workshops, where you can follow step-by-step instructions to build the components of a modern data platform in your AWS account.
Finally, due to the limited space in this book, not every use case for each component of the modern data platform can be covered. The idea is to give you a simple but holistic view of what possible use cases might look like and how you can leverage key features of many AWS services to get to a working solution. A solution can be achieved in many possible ways, and every solution has pros and cons that are very specific to the implementation. Technology evolves fast, and so do many of the AWS services; always do your due diligence and look out for better ways to solve problems.
With that, this short introduction has come to an end. The idea here was to provide a quick history of how data and analytics evolved. We went through the different types of data warehouse designs, along with their pros and cons. We also looked at how the recent exponential growth of data has made it difficult to use the same type of system architecture for all types of use cases.
This gives us a perfect launching pad to understand what modern data architecture is and how it can be architected using different AWS data and analytics services.