Posted to issues@kudu.apache.org by "Adar Dembo (JIRA)" <ji...@apache.org> on 2016/09/02 18:52:20 UTC
[jira] [Resolved] (KUDU-1581) Kudu-Spark read failure when the Kudu table contains BINARY column
[ https://issues.apache.org/jira/browse/KUDU-1581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Adar Dembo resolved KUDU-1581.
------------------------------
Resolution: Fixed
Fix Version/s: 1.0.0
Fixed in commit efb6024. Thanks, Ram!
> Kudu-Spark read failure when the Kudu table contains BINARY column
> ------------------------------------------------------------------
>
> Key: KUDU-1581
> URL: https://issues.apache.org/jira/browse/KUDU-1581
> Project: Kudu
> Issue Type: Bug
> Components: client
> Affects Versions: 0.10.0
> Reporter: Ram Mettu
> Assignee: Ram Mettu
> Fix For: 1.0.0
>
>
> When using kudu-spark to create a Spark DataFrame for a Kudu table containing a BINARY column, any action on the DataFrame fails with a serialization error.
> Steps to reproduce:
> 1. Create kudu table with binary column(s)
> 2. Populate table with data
> 3. Create Spark Dataframe and perform an action
> val data = sqlContext.read.options(Map("kudu.master" -> masterAddress, "kudu.table" -> "test")).kudu
> data.show()
> Results in an error:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0 in stage 1.0 (TID 1) had a not serializable result: java.nio.HeapByteBuffer
> Serialization stack:
> - object not serializable (class: java.nio.HeapByteBuffer, value: java.nio.HeapByteBuffer[pos=677 lim=682 cap=727])
> - element of array (index: 8)
> - array (class [Ljava.lang.Object;, size 9)
> - field (class: org.apache.spark.sql.catalyst.expressions.GenericInternalRow, name: values, type: class [Ljava.lang.Object;)
> - object (class org.apache.spark.sql.catalyst.expressions.GenericInternalRow, [0,0,0,0.0,0,false,0,0.0,java.nio.HeapByteBuffer[pos=677 lim=682 cap=727]])
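The stack trace points at the root cause: java.nio.ByteBuffer (here its HeapByteBuffer subclass, which the Kudu client returns for BINARY columns) does not implement java.io.Serializable, so Spark cannot ship rows containing one back to the driver; copying the buffer's contents into an Array[Byte] avoids this. A minimal sketch outside Spark illustrating both halves (the object and method names here are illustrative, not from the Kudu or Spark codebase):

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}
import java.nio.ByteBuffer

object BinarySerDemo {
  // Attempt Java serialization, as Spark does when returning task results.
  def trySerialize(obj: AnyRef): Boolean =
    try {
      new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(obj)
      true
    } catch {
      case _: NotSerializableException => false
    }

  def main(args: Array[String]): Unit = {
    val buf = ByteBuffer.wrap("hello".getBytes("UTF-8"))

    // A raw ByteBuffer fails Java serialization -- the cause of the Spark error.
    assert(!trySerialize(buf))

    // Copying the contents into a byte array yields a serializable value.
    val bytes = new Array[Byte](buf.remaining())
    buf.duplicate().get(bytes)
    assert(trySerialize(bytes))
  }
}
```

This is why the fix lands on the client/connector side: the BINARY column value must be materialized as a byte array before it is placed into the Spark row.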
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)