Posted to issues@spark.apache.org by "BahaaEddin AlAila (JIRA)" <ji...@apache.org> on 2017/02/17 06:45:42 UTC

[jira] [Created] (SPARK-19646) binaryRecords replicates records in scala API

BahaaEddin AlAila created SPARK-19646:
-----------------------------------------

             Summary: binaryRecords replicates records in scala API
                 Key: SPARK-19646
                 URL: https://issues.apache.org/jira/browse/SPARK-19646
             Project: Spark
          Issue Type: Bug
          Components: Spark Core
    Affects Versions: 2.1.0, 2.0.0
            Reporter: BahaaEddin AlAila
            Priority: Minor


The Scala sc.binaryRecords replicates one record across the entire set.
For example, I am trying to load the CIFAR binary data, where each 3073-byte record in one big binary file represents a 32x32x3-byte image plus 1 byte for the label. The file resides on my local filesystem.
.take(5) returns 5 records that are all identical, and .collect() returns 10,000 records that are all identical.
What is puzzling is that the PySpark version works perfectly, even though underneath it calls the same Scala implementation.
I have tested this on 2.1.0 and 2.0.0.
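A plausible cause (an assumption on my part, not confirmed in this report) is the classic Hadoop-style record-reader pitfall: the reader reuses one mutable buffer for every record, so collecting the RDD retains many references to the same array, all holding the last record read. The sketch below is a minimal Python model of that pitfall, not Spark code; `read_records` is a hypothetical stand-in for the record reader.

```python
def read_records(data, record_len):
    """Model of a record reader that reuses a single buffer.

    Every yield returns the SAME bytearray object, overwritten in place,
    which is the suspected behavior behind identical collected records.
    """
    buf = bytearray(record_len)
    for off in range(0, len(data), record_len):
        buf[:] = data[off:off + record_len]
        yield buf  # same object every time

# Three distinct 4-byte "records": 00 00 00 00, 01 01 01 01, 02 02 02 02
data = b"".join(bytes([i]) * 4 for i in range(3))

# Buggy pattern: collecting the raw yields keeps aliases of one buffer,
# so every "record" ends up equal to the last one read.
collected = list(read_records(data, 4))
assert all(r is collected[0] for r in collected)
assert bytes(collected[0]) == b"\x02\x02\x02\x02"

# Workaround: copy each record before retaining it.
copied = [bytes(r) for r in read_records(data, 4)]
assert copied == [b"\x00" * 4, b"\x01" * 4, b"\x02" * 4]
```

If this is indeed the cause, the analogous workaround on the Spark side would be to copy each record before collecting, e.g. `sc.binaryRecords(path, 3073).map(_.clone())` in Scala, and it would also explain why PySpark is unaffected: records are serialized (and thus copied) on their way to Python.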



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
