
You're reading from Apache Spark for Data Science Cookbook

Product type: Book
Published in: Dec 2016
ISBN-13: 9781785880100
Edition: 1st Edition
Author (1)
Padma Priya Chitturi

Padma Priya Chitturi is Analytics Lead at Fractal Analytics Pvt Ltd and has over five years of experience in Big Data processing. Currently, she is part of capability development at Fractal and is responsible for developing solutions to analytical problems across multiple business domains at large scale. Prior to this, she worked on an airline product at Amadeus Software Labs, on a real-time processing platform serving one million user requests per second. She has worked on realizing large-scale deep networks (Jeffrey Dean's work in Google Brain) for image classification on the Big Data platform Spark. She works closely with Big Data technologies such as Spark, Storm, Cassandra, and Hadoop, and has been an open source contributor to Apache Storm.


Concatenating and merging operations over DataFrames


This recipe shows how to concatenate, merge/join, and perform complex operations over Pandas DataFrames as well as Spark DataFrames.

Getting ready

To step through this recipe, you will need a running Spark cluster in either pseudo-distributed mode or one of the distributed modes, that is, standalone, YARN, or Mesos. You will also need Python and IPython installed on a Linux machine, for example Ubuntu 14.04.
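
Before moving on, it can help to confirm that this environment is actually wired up. The quick check below is not part of the recipe; it assumes you are already inside the IPython shell started with the pyspark profile, where the SparkContext is exposed as sc.

    # Hypothetical sanity check (not from the book): run inside the IPython
    # shell launched with the pyspark profile, where `sc` is already defined.
    import pandas as pd
    import IPython

    print("Spark version:", sc.version)   # version reported by the running cluster
    print("Spark master :", sc.master)    # e.g. local[*], spark://..., yarn-client, mesos://...
    print("pandas       :", pd.__version__)
    print("IPython      :", IPython.__version__)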

How to do it…

  1. Invoke ipython console --profile=pyspark:

          In [1]: from pyspark import SparkConf, SparkContext
             ...: from pyspark.sql import SQLContext
          In [2]: import pandas as pd
          In [3]: sqlcontext = SQLContext(sc)
          In [4]: pdf1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
             ...:                      'B': ['B0', 'B1', 'B2', 'B3'],
             ...:                      'C': ['C0', 'C1', 'C2', 'C3'],
             ...:                      'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 1, 2, 3])
          In [5]: pdf2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
             ...:                      'B': ['B4', 'B5', 'B6', 'B7'],
             ...:                      'C': ['C4', 'C5', 'C6', 'C7'],
             ...:                      'D':    ...
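
The excerpt is truncated at this point, so the rest of step 1 and the follow-on steps are not shown. The sketch below is a hedged reconstruction, not the book's code: pdf2's remaining values are assumed by continuing the obvious pattern, and the pandas pd.concat/pd.merge calls and the Spark-side createDataFrame, unionAll, and join calls are standard APIs chosen to match the recipe's title.

    # Hedged sketch (not from the book): concatenation and merging in both
    # pandas and Spark. Assumes it runs in the same IPython pyspark session,
    # where `sc` is already defined by the pyspark profile.
    import pandas as pd
    from pyspark.sql import SQLContext

    pdf1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                         'B': ['B0', 'B1', 'B2', 'B3'],
                         'C': ['C0', 'C1', 'C2', 'C3'],
                         'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 1, 2, 3])
    pdf2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],   # values assumed by
                         'B': ['B4', 'B5', 'B6', 'B7'],   # continuing pdf1's pattern
                         'C': ['C4', 'C5', 'C6', 'C7'],
                         'D': ['D4', 'D5', 'D6', 'D7']}, index=[4, 5, 6, 7])

    # pandas: stack the frames row-wise, then join them on column 'A'
    pdf_concat = pd.concat([pdf1, pdf2])                    # 8 rows, same columns
    pdf_merged = pd.merge(pdf1, pdf2, on='A', how='outer')  # SQL-style outer join

    # Spark: convert the pandas frames to Spark DataFrames and do the same
    sqlcontext = SQLContext(sc)
    sdf1 = sqlcontext.createDataFrame(pdf1)
    sdf2 = sqlcontext.createDataFrame(pdf2)

    sdf_concat = sdf1.unionAll(sdf2)                    # row-wise concatenation
    sdf_joined = sdf1.join(sdf2, on='A', how='outer')   # join on column 'A'

    sdf_concat.show()
    sdf_joined.show()

In pandas, pd.concat stacks rows (axis=0 by default) while pd.merge performs SQL-style joins; on the Spark side, unionAll and join are the closest equivalents (from Spark 2.0 onward, union replaces the deprecated unionAll).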