Posted to issues@spark.apache.org by "Reynold Xin (JIRA)" <ji...@apache.org> on 2016/01/04 23:41:39 UTC
[jira] [Created] (SPARK-12635) More efficient (column batch) serialization for Python/R
Reynold Xin created SPARK-12635:
-----------------------------------
Summary: More efficient (column batch) serialization for Python/R
Key: SPARK-12635
URL: https://issues.apache.org/jira/browse/SPARK-12635
Project: Spark
Issue Type: New Feature
Components: PySpark, SparkR, SQL
Reporter: Reynold Xin
Serialization between Scala / Python / R is pretty slow. Python and R both work well with columnar batch interfaces (e.g. NumPy arrays). Technically we should be able to just pass column batches around with minimal serialization (maybe even zero-copy memory sharing).
Note that this depends on some internal refactoring to use a column batch interface in Spark SQL.
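To make the idea concrete, here is a minimal sketch in plain NumPy (not Spark code; every name below is illustrative, standing in for Spark's internal column batches). It contrasts row-at-a-time pickling with shipping each column as one contiguous buffer plus a tiny dtype header, where reconstruction via np.frombuffer is a zero-copy view over the received bytes:

{code:python}
import pickle
import numpy as np

# A "column batch": one contiguous buffer per column.
batch = {
    "id": np.arange(1_000_000, dtype=np.int64),
    "value": np.random.rand(1_000_000),
}

# Row-at-a-time serialization, roughly what per-row pickling costs today.
rows = list(zip(batch["id"].tolist(), batch["value"].tolist()))
row_bytes = pickle.dumps(rows)

# Column-batch serialization: raw buffer per column + (dtype, length) header.
payload = {name: (str(col.dtype), col.tobytes()) for name, col in batch.items()}

# Deserialization: np.frombuffer makes a zero-copy view over the bytes.
restored = {name: np.frombuffer(buf, dtype=dtype)
            for name, (dtype, buf) in payload.items()}

assert restored["id"][42] == 42  # data survives the round trip
print("row-wise bytes:", len(row_bytes))
print("columnar bytes:", sum(len(buf) for _, buf in payload.values()))
{code}

The columnar path avoids per-value object overhead entirely; the only copies are the ones the transport itself requires, which is what makes a (near) zero-copy handoff to Python/R plausible.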