Posted to issues@beam.apache.org by "Ankit Jhalaria (JIRA)" <ji...@apache.org> on 2019/03/12 21:26:00 UTC

[jira] [Commented] (BEAM-6812) HashPartitioning on Spark with arrays as keys produces unpredictable results

    [ https://issues.apache.org/jira/browse/BEAM-6812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16791026#comment-16791026 ] 

Ankit Jhalaria commented on BEAM-6812:
--------------------------------------

* When running a Combine.perKey transform on Spark, we noticed duplicate results in the output for the same key.
 * On further investigation, it turns out that `StreamingTransformTranslator` uses Spark's `HashPartitioner`, which does not work correctly when partitioning on a key that is an array.
 * Note from the Spark source:

   /**
    * A [[org.apache.spark.Partitioner]] that implements hash-based partitioning using
    * Java's `Object.hashCode`.
    *
    * Java arrays have hashCodes that are based on the arrays' identities rather than their contents,
    * so attempting to partition an RDD[Array[_]] or RDD[(Array[_], _)] using a HashPartitioner will
    * produce an unexpected or incorrect result.
    */
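The root cause can be reproduced in plain Java, independent of Spark. A minimal sketch (class and variable names are illustrative): `Object.hashCode` on an array is identity-based, so two arrays with equal contents generally hash differently, and a hash partitioner keyed on them would scatter records for the "same" key across partitions. `java.util.Arrays.hashCode` is the content-based alternative.

```java
import java.util.Arrays;

public class ArrayKeyHashDemo {
    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};

        // Identity-based hashing: distinct array objects, so the default
        // hash codes are unrelated to the contents. A partitioner using
        // Object.hashCode can send a and b to different partitions even
        // though they represent the same key.
        System.out.println("identity hashes: " + a.hashCode() + " vs " + b.hashCode());

        // Content-based hashing: equal contents yield equal hash codes.
        System.out.println("content hashes equal: "
                + (Arrays.hashCode(a) == Arrays.hashCode(b))); // true
    }
}
```

This is why keys in a Spark shuffle need content-based equality and hashing (e.g. comparing serialized key bytes with `Arrays` utilities, or wrapping the array) rather than the raw array's `Object.hashCode`.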

> HashPartitioning on Spark with arrays as keys produces unpredictable results
> ----------------------------------------------------------------------------
>
>                 Key: BEAM-6812
>                 URL: https://issues.apache.org/jira/browse/BEAM-6812
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>            Reporter: Ankit Jhalaria
>            Assignee: Ankit Jhalaria
>            Priority: Critical
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)