Posted to issues@spark.apache.org by "Apache Spark (JIRA)" <ji...@apache.org> on 2015/05/18 01:28:59 UTC

[jira] [Commented] (SPARK-7691) Use type-specific row accessor functions in CatalystTypeConverters' toScala functions

    [ https://issues.apache.org/jira/browse/SPARK-7691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14547383#comment-14547383 ] 

Apache Spark commented on SPARK-7691:
-------------------------------------

User 'JoshRosen' has created a pull request for this issue:
https://github.com/apache/spark/pull/6222

> Use type-specific row accessor functions in CatalystTypeConverters' toScala functions
> -------------------------------------------------------------------------------------
>
>                 Key: SPARK-7691
>                 URL: https://issues.apache.org/jira/browse/SPARK-7691
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.4.0
>            Reporter: Josh Rosen
>            Assignee: Josh Rosen
>
> CatalystTypeConverters' Catalyst-row-to-Scala-row converters access columns' values via the generic {{Row.get()}} call rather than using type-specific accessor methods.  If we refactor the internal converter interfaces slightly, we can pass the row and column number into the converter function and allow it to do its own type-specific field extraction, similar to what we do in UnsafeRowConverter.  This is a blocker for unit testing new operators that I'm developing as part of Project Tungsten, since those operators may output {{UnsafeRow}} instances, which don't support the generic {{get()}}.
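
As a rough sketch of the shape this refactoring suggests (the class and method names below are illustrative only, not necessarily those used in the pull request), the converter can take the row and column ordinal and perform its own type-specific extraction:

{code:scala}
import org.apache.spark.sql.Row

// Hypothetical converter interface: the shared code handles the null check,
// while each subclass extracts the field with a type-specific accessor.
abstract class RowToScalaConverter[T] {
  final def toScala(row: Row, column: Int): T =
    if (row.isNullAt(column)) null.asInstanceOf[T] else toScalaImpl(row, column)

  // Subclasses call a specialized getter (getInt, getString, ...) instead of
  // the generic row.get(column), so row implementations that only support the
  // specialized accessors still work.
  protected def toScalaImpl(row: Row, column: Int): T
}

// Example converter for string columns.
object StringConverter extends RowToScalaConverter[String] {
  override protected def toScalaImpl(row: Row, column: Int): String =
    row.getString(column)
}
{code}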



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org