You're reading from  Extending Power BI with Python and R - Second Edition

Product type: Book
Published in: Mar 2024
Reading level: Intermediate
Publisher: Packt
ISBN-13: 9781837639533
Edition: 2nd
Author: Luca Zavarella

Luca Zavarella has a rich background as an Azure Data Scientist Associate and Microsoft MVP, with a Computer Engineering degree from the University of L'Aquila. His decade-plus experience spans the Microsoft Data Platform, starting as a T-SQL developer on SQL Server 2000 and 2005, then mastering the full suite of Microsoft Business Intelligence tools (SSIS, SSAS, SSRS), and advancing into data warehousing. Recently, his focus has shifted to advanced analytics, data science, and AI, contributing to the community as a speaker and blogger, especially on Medium. Currently, he leads the Data & AI division at iCubed, and he also holds an honors degree in classical piano from the "Alfredo Casella" Conservatory in L'Aquila.

Boosting Data Loading Speed in Power BI with Parquet Format

Often, the analyses you are asked to perform in Power BI must be based on fairly large data sources. In the previous chapter, you learned how to work with data that is larger than the RAM available on your machine, and you saw that query execution times are on the order of minutes.

Now consider a report that needs to perform several calculations on the same corpus of data. The report's overall execution time is obviously strongly related to the number of operations to be performed on that data. Therefore, reducing the execution time of individual queries allows the report's dataset to be updated as quickly as possible with each refresh.

Queries implemented in a Power BI report most often need to retrieve aggregated measures that provide users with the information they need to take specific business actions. If the data is available in relational databases...
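As a minimal sketch of the kind of pre-aggregation described above (not the book's own example; the table and column names are hypothetical), you can compute an aggregated measure once in pandas so that the report only ever touches a small summary table:

```python
import pandas as pd

# Hypothetical sales data; in practice this would come from a large source
sales = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South"],
    "amount": [120.0, 80.0, 200.0, 50.0, 150.0],
})

# Pre-aggregate the measure once, so the report reuses a small summary table
summary = sales.groupby("region", as_index=False)["amount"].sum()
print(summary)
```

The same idea applies whatever the aggregation: do the expensive work once, outside the per-refresh query path, and persist only the result.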

Technical requirements

This chapter requires you to have a working internet connection and Power BI Desktop already installed on your machine (we used version 2.114.664.0 64-bit, February 2022). You must have properly configured the R and Python engines and IDEs as outlined in Chapter 2, Configuring R with Power BI, and Chapter 3, Configuring Python with Power BI.

Join our book community on Discord

https://packt.link/EarlyAccessCommunity


In the previous chapter, you learned how to read from and write to a CSV file using both Python and R. When it comes to reading a file, whether you use Power BI's standard data import feature or the techniques shown in the previous chapter, the main limitation on the file size is the amount of RAM available on the machine where Power BI Desktop is installed.

In a data enrichment phase, it may be necessary to extract information needed for ongoing analysis from very large files (terabytes in size). In these cases, it is almost always necessary to implement big data solutions to handle such large amounts of data. Very often, however, you need to import files that are only slightly larger than the available memory in order to extract aggregate information and then persist it in a small table for reuse during processing. In such cases, you don't need to worry about sophisticated big data platforms...
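One common way to extract an aggregate from a file slightly larger than RAM, without any big data platform, is to stream it in chunks. A minimal sketch with pandas (the in-memory `StringIO` stands in for a large CSV on disk; with a real file you would pass its path):

```python
import io
import pandas as pd

# Stand-in for a large CSV on disk (hypothetical columns)
csv_data = io.StringIO(
    "product,qty\n"
    "A,1\nB,2\nA,3\nB,4\nA,5\n"
)

totals = {}
# chunksize streams the file in pieces, so peak memory stays bounded
for chunk in pd.read_csv(csv_data, chunksize=2):
    partial = chunk.groupby("product")["qty"].sum()
    for product, qty in partial.items():
        totals[product] = totals.get(product, 0) + qty

# Persist the small aggregate for reuse instead of the raw data
result = pd.Series(totals, name="qty")
print(result)
```

Only the running totals ever live in memory, so the file can be far larger than available RAM as long as the final aggregate is small.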


A typical analytic scenario using large datasets

One of the most common activities of a data scientist is to analyze a dataset relevant to a business scenario. The goal of the analysis is to identify associations and relationships between variables that help discover new measurable aspects of the business (insights), which can then be used to help it grow. It may happen that the available data is not sufficient to identify strong associations between variables, because relevant additional variables are simply not present in the dataset. In this case, obtaining new data that is not generated by your organization but that enriches the context of your dataset (a data augmentation process) can improve the strength of the statistical associations between your variables. For example, being able to link weather forecast data to a dataset that reports the measurements of a dam's water level will certainly introduce significant variables to better...
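The dam example above boils down to a join on a shared key. A minimal sketch of such a data augmentation step in pandas, with entirely hypothetical data and column names:

```python
import pandas as pd

# Hypothetical dam water-level measurements (the internal dataset)
levels = pd.DataFrame({
    "date": ["2024-01-01", "2024-01-02"],
    "water_level_m": [41.2, 40.8],
})

# External weather data covering the same dates (the augmentation source)
weather = pd.DataFrame({
    "date": ["2024-01-01", "2024-01-02"],
    "rainfall_mm": [12.5, 0.0],
})

# A left join keeps every measurement and enriches it with weather variables
enriched = levels.merge(weather, on="date", how="left")
print(enriched.columns.tolist())
```

The left join guarantees no original measurement is dropped even when the external source has gaps.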

Importing large datasets with Python

In Chapter 3, Configuring Python with Power BI, we suggested that you install some of the most commonly used data management packages in your environment, including NumPy, pandas, and scikit-learn. The biggest limitation of these packages is that they cannot handle datasets larger than the RAM of the machine on which they run, so they cannot scale out to more than one machine. To overcome this limitation, distributed systems based on Spark, which has become a dominant tool in the big data analytics landscape, are often used. However, moving to these systems forces developers to rewrite code they have already written using PySpark, the API created to work with Spark objects from Python. This process is generally seen as delaying project delivery and frustrating developers who are much more comfortable with the libraries available for standard Python.

In response to these issues, the community has developed a new library...
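The RAM ceiling mentioned above is easy to quantify before it bites: pandas can report a DataFrame's exact in-memory footprint, which lets you estimate whether a dataset will fit. A minimal sketch with synthetic data:

```python
import numpy as np
import pandas as pd

# A toy frame; memory_usage(deep=True) reports the true per-column footprint
df = pd.DataFrame({
    "id": np.arange(1_000_000, dtype=np.int64),
    "value": np.random.rand(1_000_000),
})

total_bytes = df.memory_usage(deep=True).sum()
# Eight bytes per int64/float64 cell: ~16 MB for a million rows of two columns
print(f"{total_bytes / 1e6:.1f} MB")
```

Extrapolating this per-row cost to your real row count tells you quickly whether an in-memory load is feasible or whether a chunked or distributed approach is needed.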

Importing large datasets with R

The same scalability limitations illustrated for the Python data manipulation packages also apply to the R packages of the Tidyverse ecosystem: in R, too, you cannot work with a dataset larger than the RAM available on the machine. Here as well, the first solution usually adopted is to switch to Spark-based distributed systems, which provide the SparkR language. SparkR offers a distributed implementation of the data frame you are used to in R, supporting filtering, aggregation, and selection operations just as the dplyr package does. For fans of the Tidyverse world, RStudio is actively developing the sparklyr package, which lets you use all the functionality of dplyr even on distributed DataFrames. However, using Spark-based systems to process CSVs that together take up little more than the RAM available on your machine may be overkill, given the overhead of the Java infrastructure needed to run them.

In the...
