Posted to issues@beam.apache.org by "ASF GitHub Bot (JIRA)" <ji...@apache.org> on 2019/03/20 19:46:00 UTC

[jira] [Work logged] (BEAM-6812) Convert keys to ByteArray in Combine.perKey for Spark

     [ https://issues.apache.org/jira/browse/BEAM-6812?focusedWorklogId=216429&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-216429 ]

ASF GitHub Bot logged work on BEAM-6812:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 20/Mar/19 19:45
            Start Date: 20/Mar/19 19:45
    Worklog Time Spent: 10m 
      Work Description: dmvk commented on pull request #8042: [BEAM-6812]: Convert keys to ByteArray in Combine.perKey to make sure hashCode is consistent
URL: https://github.com/apache/beam/pull/8042#discussion_r267514847
 
 

 ##########
 File path: runners/spark/src/main/java/org/apache/beam/runners/spark/translation/TransformTranslator.java
 ##########
 @@ -569,8 +569,8 @@ private static Partitioner getPartitioner(EvaluationContext context) {
     Long bundleSize =
         context.getSerializableOptions().get().as(SparkPipelineOptions.class).getBundleSize();
     return (bundleSize > 0)
-        ? null
-        : new HashPartitioner(context.getSparkContext().defaultParallelism());
+        ? new HashPartitioner(context.getSparkContext().defaultParallelism())
+        : null;
 
 Review comment:
   On second thought, it was correct.
   
   https://github.com/apache/beam/pull/6884/files#r246077919
   
   This was in order to maintain the old functionality (bundleSize == 0, which basically means using the predefined parallelism).
   
   I think the old functionality doesn't make much sense, as it doesn't scale with the input data. I guess someone may use it to re-scale a "downstream" stage, but there should be a better mechanism for that.
   
   Any thoughts? @timrobertson100 @kyle-winkelman 
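   
   For context, here is a sketch of how the whole method reads with the original branch order kept (names taken from the diff above; not necessarily the exact file contents):
   
       private static Partitioner getPartitioner(EvaluationContext context) {
         Long bundleSize =
             context.getSerializableOptions().get().as(SparkPipelineOptions.class).getBundleSize();
         // bundleSize > 0: return no partitioner, so partitioning follows the input
         // bundles and scales with the data.
         // bundleSize == 0 (the old default behaviour): hash-partition using the
         // Spark context's predefined default parallelism.
         return (bundleSize > 0)
             ? null
             : new HashPartitioner(context.getSparkContext().defaultParallelism());
       }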
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 216429)
    Time Spent: 2h 10m  (was: 2h)

> Convert keys to ByteArray in Combine.perKey for Spark
> -----------------------------------------------------
>
>                 Key: BEAM-6812
>                 URL: https://issues.apache.org/jira/browse/BEAM-6812
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-spark
>            Reporter: Ankit Jhalaria
>            Assignee: Ankit Jhalaria
>            Priority: Critical
>          Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> * During calls to Combine.perKey, we want the keys used to have a consistent hashCode when invoked from different JVMs.
>  * However, while testing this in our company we found that when using protobuf messages as keys during a combine, the hashCode can differ for the same key across JVMs. This results in duplicates.
>  * The `ByteArray` class in Spark has a stable hash code when dealing with arrays as well.
>  * GroupByKey correctly converts keys to `ByteArray` and uses coders for serialization.
>  * The fix does something similar when dealing with combines (see the sketch below).
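>
> A minimal, hypothetical sketch of that idea (the StableKey name is illustrative, not the actual class used in the PR): encode the key with its coder and base equals/hashCode on the resulting bytes, so the result no longer depends on the key type's own hashCode.
>
>     import java.util.Arrays;
>     import org.apache.beam.sdk.coders.Coder;
>     import org.apache.beam.sdk.coders.CoderException;
>     import org.apache.beam.sdk.util.CoderUtils;
>
>     /** Illustrative wrapper whose hashCode/equals depend only on the encoded bytes. */
>     final class StableKey {
>       private final byte[] bytes;
>
>       private StableKey(byte[] bytes) {
>         this.bytes = bytes;
>       }
>
>       /** Encode the key with its coder so equal keys yield equal byte arrays. */
>       static <K> StableKey of(K key, Coder<K> keyCoder) throws CoderException {
>         return new StableKey(CoderUtils.encodeToByteArray(keyCoder, key));
>       }
>
>       @Override
>       public boolean equals(Object o) {
>         return o instanceof StableKey && Arrays.equals(bytes, ((StableKey) o).bytes);
>       }
>
>       @Override
>       public int hashCode() {
>         // Arrays.hashCode is deterministic for the same bytes on any JVM,
>         // unlike an Object.hashCode that may vary between processes.
>         return Arrays.hashCode(bytes);
>       }
>     }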



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)