Posted to issues@spark.apache.org by "Hyukjin Kwon (JIRA)" <ji...@apache.org> on 2018/06/01 01:22:00 UTC
[jira] [Commented] (SPARK-21187) Complete support for remaining Spark data types in Arrow Converters
[ https://issues.apache.org/jira/browse/SPARK-21187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16497448#comment-16497448 ]
Hyukjin Kwon commented on SPARK-21187:
--------------------------------------
(y)
> Complete support for remaining Spark data types in Arrow Converters
> -------------------------------------------------------------------
>
> Key: SPARK-21187
> URL: https://issues.apache.org/jira/browse/SPARK-21187
> Project: Spark
> Issue Type: Umbrella
> Components: PySpark, SQL
> Affects Versions: 2.3.0
> Reporter: Bryan Cutler
> Assignee: Bryan Cutler
> Priority: Major
>
> This is to track adding the remaining type support in Arrow Converters. Currently, only primitive data types are supported.
> Remaining types:
> * -*Date*-
> * -*Timestamp*-
> * *Complex*: Struct, -Array-, Arrays of Date/Timestamps, Map
> * -*Decimal*-
> * *Binary* - in pyspark
> Some things to do before closing this out:
> * -Look to upgrading to Arrow 0.7 for better Decimal support (can now write values as BigDecimal)-
> * -Need to add some user docs-
> * -Make sure Python tests are thorough-
> * Check into complex type support mentioned in comments by [~leif]; should we support multi-indexing?
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org