Posted to reviews@spark.apache.org by zsxwing <gi...@git.apache.org> on 2018/06/12 23:15:37 UTC

[GitHub] spark pull request #21428: [SPARK-24235][SS] Implement continuous shuffle wr...

Github user zsxwing commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21428#discussion_r194912456
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/continuous/shuffle/ContinuousShuffleReadRDD.scala ---
    @@ -34,8 +34,10 @@ case class ContinuousShuffleReadPartition(
       // Initialized only on the executor, and only once even as we call compute() multiple times.
       lazy val (reader: ContinuousShuffleReader, endpoint) = {
         val env = SparkEnv.get.rpcEnv
    -    val receiver = new UnsafeRowReceiver(queueSize, numShuffleWriters, epochIntervalMs, env)
    -    val endpoint = env.setupEndpoint(s"UnsafeRowReceiver-${UUID.randomUUID()}", receiver)
    +    val receiver = new RPCContinuousShuffleReader(
    +      queueSize, numShuffleWriters, epochIntervalMs, env)
    +    val endpoint = env.setupEndpoint(s"RPCContinuousShuffleReader-${UUID.randomUUID()}", receiver)
    --- End diff --
    
    Is it possible to get the query run id here? It would be helpful for debugging if the endpoint name contained the query run id and partition id.
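    
    To make the suggestion concrete, a minimal hypothetical sketch (not code from this PR) of what such a name could look like, assuming the query run id and partition index were threaded through to where the endpoint is registered:
    
        import java.util.UUID
    
        // Hypothetical helper: embed the query run id and partition index in the
        // endpoint name so log lines can be correlated back to a specific query
        // run and shuffle partition. The trailing random UUID keeps the name unique.
        def readerEndpointName(queryRunId: UUID, partitionIndex: Int): String =
          s"RPCContinuousShuffleReader-$queryRunId-$partitionIndex-${UUID.randomUUID()}"
    
        // e.g. env.setupEndpoint(readerEndpointName(queryRunId, partitionIndex), receiver)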


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org