Posted to issues@spark.apache.org by "vishal kumar yadav (JIRA)" <ji...@apache.org> on 2018/12/11 09:44:00 UTC

[jira] [Commented] (SPARK-20528) Add BinaryFileReader and Writer for DataFrames

    [ https://issues.apache.org/jira/browse/SPARK-20528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16716684#comment-16716684 ] 

vishal kumar yadav commented on SPARK-20528:
--------------------------------------------

I am facing a similar issue.

val sc: SparkContext = new SparkContext(conf)
// binaryFiles returns (path, PortableDataStream); convert each stream to a byte array
val a = sc.binaryFiles("path_for/binary_file").map { x => (x._1, x._2.toArray) }

val sqlContext = new SQLContext(sc)

val binDataFrame = sqlContext.createDataFrame(a)

binDataFrame.show()

Error:

18/12/11 15:07:25 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 192.168.0.164, executor 0): java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD

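A possible workaround, assuming Spark 3.0 or later is available: the built-in "binaryFile" data source (added under SPARK-25348) reads files straight into a DataFrame, so neither sc.binaryFiles nor createDataFrame is needed. A minimal sketch (the path is the same placeholder as above):

```scala
// Sketch, assuming Spark 3.0+: the "binaryFile" source loads each file as one
// row with columns path, modificationTime, length, and content (the raw bytes).
val binDF = spark.read.format("binaryFile").load("path_for/binary_file")
binDF.select("path", "length").show()
```

This stays entirely in DataFrame land, which also sidesteps RDD serialization problems like the ClassCastException above.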

> Add BinaryFileReader and Writer for DataFrames
> ----------------------------------------------
>
>                 Key: SPARK-20528
>                 URL: https://issues.apache.org/jira/browse/SPARK-20528
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 2.2.0
>            Reporter: Joseph K. Bradley
>            Priority: Major
>
> It would be very useful to have a binary data reader/writer for DataFrames, presumably called via {{spark.read.binaryFiles}}, etc.
> Currently, going through RDDs is annoying since it requires different code paths for Scala vs Python:
> Scala:
> {code}
> val binaryFilesRDD = sc.binaryFiles("mypath")
> val binaryFilesDF = spark.createDataFrame(binaryFilesRDD)
> {code}
> Python:
> {code}
> binaryFilesRDD = sc.binaryFiles("mypath")
> binaryFilesRDD_recast = binaryFilesRDD.map(lambda x: (x[0], bytearray(x[1])))
> binaryFilesDF = spark.createDataFrame(binaryFilesRDD_recast)
> {code}
> This is because Scala and Python {{sc.binaryFiles}} return different types, which makes sense in RDD land but not DataFrame land.
> My motivation here is working with images in Spark.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org