Posted to issues@spark.apache.org by "Luke Miner (JIRA)" <ji...@apache.org> on 2017/01/19 22:52:26 UTC
[jira] [Commented] (SPARK-14141) Let user specify datatypes of pandas dataframe in toPandas()
[ https://issues.apache.org/jira/browse/SPARK-14141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15830750#comment-15830750 ]
Luke Miner commented on SPARK-14141:
------------------------------------
One option is to convert all the categorical variables into integers in Spark and then, once we have the pandas dataframe, re-encode them as categoricals. Here's a dummy example.
{code}
import pandas as pd
from pyspark.sql import functions as F


def to_pandas_categorical(df, col_names):
    """Encode the given string columns as integer codes in Spark, call
    toPandas(), then rebuild those columns as pandas Categoricals."""

    def encode_string_with_int(df, col_names):
        column_codes = {}
        for col_name in col_names:
            # Collect the distinct values; their positions become the integer
            # codes pandas will use to rebuild the categorical later.
            unique_values = (df
                             .select(col_name)
                             .distinct()
                             .rdd
                             .map(lambda r: r[0])
                             .collect())
            encodings = [str(i) for i in range(len(unique_values))]
            column_codes[col_name] = unique_values
            # Replace each value with its (string) code, then cast to integer.
            df = df.replace(unique_values, encodings, col_name)
            df = df.withColumn(col_name, F.col(col_name).cast('integer'))
        return df, column_codes

    df_encoded, column_codes = encode_string_with_int(df, col_names)
    df_encoded_pdf = df_encoded.toPandas()
    for col_name in col_names:
        df_encoded_pdf[col_name] = pd.Categorical.from_codes(
            df_encoded_pdf[col_name], column_codes[col_name])
    return df_encoded_pdf
{code}
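A minimal usage sketch, assuming an active SparkSession named spark; the column names and values here are purely illustrative:
{code}
spark_df = spark.createDataFrame(
    [('red', 'S'), ('blue', 'M'), ('red', 'L')],
    ['color', 'size'])

pdf = to_pandas_categorical(spark_df, ['color', 'size'])
print(pdf.dtypes)  # both columns come back as pandas 'category' dtype
{code}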
> Let user specify datatypes of pandas dataframe in toPandas()
> ------------------------------------------------------------
>
> Key: SPARK-14141
> URL: https://issues.apache.org/jira/browse/SPARK-14141
> Project: Spark
> Issue Type: New Feature
> Components: Input/Output, PySpark, SQL
> Reporter: Luke Miner
> Priority: Minor
>
> Would be nice to specify the dtypes of the pandas dataframe during the toPandas() call. Something like:
> bq. pdf = df.toPandas(dtypes={'a': 'float64', 'b': 'datetime64', 'c': 'bool', 'd': 'category'})
> Since dtypes like `category` are more memory efficient, you could potentially load many more rows into a pandas dataframe with this option without running out of memory.
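For comparison, a minimal sketch of the closest workaround available today (the dtypes argument above is only a proposal): converting dtypes with pandas' astype after toPandas() works, but only after the full DataFrame has been materialized, so it does not reduce peak memory the way a dtypes option inside toPandas() could.
{code}
# Hypothetical illustration (not an existing toPandas() option): apply the
# desired dtypes with pandas' astype after the conversion. 'df' is any Spark
# DataFrame with columns 'a', 'c', 'd' matching the example above.
pdf = df.toPandas()
pdf = pdf.astype({'a': 'float64', 'c': 'bool', 'd': 'category'})
{code}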