Posted to user@spark.apache.org by Tanveer Ahmad - EWI <T....@tudelft.nl> on 2020/06/16 14:01:13 UTC

Spark dataframe creation from already distributed in-memory data sets

Hi all,


I am new to the Spark community. Please ignore if this question doesn't make sense.


Sorting my PySpark DataFrame takes only a fraction of a second (milliseconds), but moving the data is far more expensive (> 14 sec).


Explanation:

I have a huge collection of Arrow RecordBatches that is evenly distributed across my worker nodes' memory (in plasma_store). Currently, I collect all those RecordBatches back on my master node, merge them, and convert them to a single Spark DataFrame. Then I apply a sorting function to that dataframe.
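
For reference, my current (centralized) flow looks roughly like the sketch below. It's simplified: "/tmp/plasma" is a placeholder socket path, object_ids stands for the ObjectIDs I gather from the workers, and it assumes the batches have already been shipped into the master's local plasma store:

import pyarrow as pa
import pyarrow.plasma as plasma

# On the master: fetch every RecordBatch out of the local plasma store
client = plasma.connect("/tmp/plasma")
buffers = client.get_buffers(object_ids)
batches = [pa.ipc.open_stream(buf).read_next_batch() for buf in buffers]

# Merge into one Arrow Table, then convert to a single Spark DataFrame
# (Arrow-accelerated when spark.sql.execution.arrow.enabled=true on Spark 2.x)
table = pa.Table.from_batches(batches)
df = spark.createDataFrame(table.to_pandas())

df_sorted = df.sort("key")   # the sort itself only takes milliseconds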


A Spark dataframe is a cluster-distributed data collection.


So my question is:

Is it possible to create a Spark dataframe directly from the Arrow RecordBatch collections that are already distributed across the worker nodes' memory, so that the data remains in each worker's memory (instead of being brought to the master node, merged, and then turned into a distributed dataframe)?
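
Something along the lines of the sketch below is what I'm imagining, though I don't know if it's the right approach. To be clear about my assumptions: read_local_batches is just a name I made up, "/tmp/plasma" again stands in for the real socket path, and the sketch assumes each partition only receives the 20-byte ObjectIDs of batches sitting in that node's local plasma store:

from pyspark.sql import Row

def read_local_batches(id_iter):
    # Runs on an executor: read RecordBatches from the node-local plasma store
    import pyarrow as pa
    import pyarrow.plasma as plasma
    client = plasma.connect("/tmp/plasma")
    for oid in id_iter:
        [buf] = client.get_buffers([plasma.ObjectID(oid)])
        batch = pa.ipc.open_stream(buf).read_next_batch()
        # Round-trips through Python rows, which loses Arrow's columnar layout
        for rec in batch.to_pandas().to_dict("records"):
            yield Row(**rec)

ids_rdd = spark.sparkContext.parallelize(object_id_bytes, numSlices=num_workers)
df = spark.createDataFrame(ids_rdd.mapPartitions(read_local_batches))

Even then, I don't see how to guarantee that each partition is scheduled on the node that actually holds its batches, and the row-by-row conversion seems to throw away Arrow's columnar efficiency, which is exactly what I'd like to avoid.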


Thanks!


Regards,
Tanveer Ahmad