Working with Big Data in Python [Video]

More Information
Learn
  • Understand MongoDB as a non-relational database based on JSON documents
  • Set up pyMongo as a connector to a MongoDB database and work with its cursors
  • Write MongoDB queries using operators and chain them into complex aggregation pipelines
  • Connect to MongoDB from pySpark, using Mongo connectors for high-performance processing
  • Apply Python and MongoDB to real-world examples of data pipelines
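As a taste of the query style listed above, here is a minimal sketch of chaining operators into a pyMongo aggregation pipeline. The field names (`lang`, `user`) and the database/collection used in the comments are illustrative assumptions, not taken from the course.

```python
# Minimal sketch of chaining MongoDB operators into an aggregation pipeline.
# The lang/user fields are illustrative assumptions.

def top_users_pipeline(lang="en", limit=5):
    """Chain $match, $group, $sort and $limit stages into one pipeline."""
    return [
        {"$match": {"lang": lang}},                          # filter with an operator
        {"$group": {"_id": "$user", "count": {"$sum": 1}}},  # count documents per user
        {"$sort": {"count": -1}},                            # most active users first
        {"$limit": limit},                                   # keep the top few
    ]

# With pymongo installed and a server running, the pipeline would be
# executed against a collection roughly like this:
#
#   from pymongo import MongoClient
#   tweets = MongoClient("mongodb://localhost:27017/")["course_db"]["tweets"]
#   for row in tweets.aggregate(top_users_pipeline()):
#       print(row["_id"], row["count"])
```

Each stage's output feeds the next, which is what lets a single query do filtering, grouping, and sorting server-side instead of in Python.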
About

This course is a comprehensive, practical guide to using MongoDB and Spark in Python: you will learn how to store and make sense of huge data sets, and perform basic machine learning tasks to make predictions.

MongoDB is one of the most powerful non-relational database systems available, offering robust scalability and expressive operations that, when combined with Python data analysis libraries and distributed computing, make up a valuable tool set for the modern data scientist. NoSQL databases require a new way of thinking about data and scalable queries, and once Mongo queries have been mastered, the next step is to leverage them within Python's rich analysis and visualization ecosystem.

This course covers how to use MongoDB, particularly if you are used to SQL databases, with a focus on scalability to large datasets. pyMongo is introduced as the means to interact with a MongoDB database from within Python code, and the data structures used to do so are explored. MongoDB allows complex operations and aggregations to be run within the query itself, and we cover how to use these operators.

While MongoDB is built for easy scalability across many nodes as datasets grow, Python is not. We therefore cover how to use Spark with MongoDB to handle more complex machine learning techniques for extremely large datasets. This learning is applied to several examples of real-world datasets and analyses that can form the basis of your own pipelines, allowing you to get up and running quickly with a powerful data science toolkit.
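The Spark-plus-MongoDB combination described above can be sketched as follows. The database name (`course_db`), collection (`tweets`), host, and connector version are all assumptions for illustration, not details from the course.

```python
# A minimal sketch of pointing the MongoDB Spark connector at a collection.
# Database, collection, host and connector version are assumptions.

def mongo_uri(db: str, collection: str, host: str = "localhost:27017") -> str:
    """Build the connection URI the MongoDB Spark connector reads from."""
    return f"mongodb://{host}/{db}.{collection}"

# With pyspark installed and the connector jar available, the URI is passed
# as Spark config and the collection loaded as a DataFrame:
#
#   from pyspark.sql import SparkSession
#   spark = (SparkSession.builder
#            .appName("mongo-example")
#            .config("spark.mongodb.input.uri", mongo_uri("course_db", "tweets"))
#            .config("spark.jars.packages",
#                    "org.mongodb.spark:mongo-spark-connector_2.12:3.0.1")
#            .getOrCreate())
#   df = spark.read.format("mongo").load()
#   df.printSchema()
```

Once loaded as a DataFrame, the data can be partitioned across Spark workers, which is what sidesteps single-process Python's scaling limits.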

Style and Approach

An exhaustive course that carefully covers the fundamental concepts of unstructured data and distributed programming before applying them to examples of typical data science workflows.

This course is divided into clear chunks, so you can learn at your own pace and focus on your own area of interest.

Features
  • A comprehensive introduction to the key concepts of MongoDB and Spark
  • Intuitive examples of key operations in MongoDB and Spark using Python APIs
  • Two working examples of realistic end-to-end data analyses using real-world datasets
Course Length 2 hours 41 minutes
ISBN 9781788839068
Date Of Publication 21 Feb 2018

Authors

Alex Rutherford

Alex Rutherford is a Research Scientist at MIT Media Lab. He has a PhD in Physics and nearly 10 years of experience using Python for data analysis and modeling, gained at the United Nations, Facebook, and elsewhere. He has tackled many problems using data analysis, including epidemiology, ethnic violence, vaccine hesitancy, and constitutional change, and has built pipelines for social media data, legal documents, and news articles, among others. He blogs and tweets regularly on data science and data privacy.