Posted to reviews@spark.apache.org by tdas <gi...@git.apache.org> on 2018/04/02 20:50:24 UTC

[GitHub] spark pull request #20958: [SPARK-23844][SS] Fix socket source honors recove...

Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20958#discussion_r178646725
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/streaming/DataStreamWriter.scala ---
    @@ -238,6 +238,10 @@ final class DataStreamWriter[T] private[sql](ds: Dataset[T]) {
             "write files of Hive data source directly.")
         }
     
    +    val isSocketExists = df.queryExecution.analyzed.collect {
    --- End diff ---
    
    I see what you are trying to do. But honestly, we should NOT add any more special cases for specific sources. We already have special cases for memory and foreach because it is hard to get rid of those; we should not add more.
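
    For context, the diff in question collects over the analyzed logical plan to check whether a socket source is present. Below is a minimal, self-contained sketch of that collect-over-a-plan-tree pattern; the `Plan`, `Project`, and `SocketSourceRelation` types are illustrative stand-ins, not Spark's actual Catalyst classes.

    ```scala
    // Simplified model of a logical plan tree with a Catalyst-style collect.
    sealed trait Plan {
      def children: Seq[Plan]
      // Pre-order traversal: apply the partial function wherever it matches.
      def collect[B](pf: PartialFunction[Plan, B]): Seq[B] =
        (if (pf.isDefinedAt(this)) Seq(pf(this)) else Seq.empty) ++
          children.flatMap(_.collect(pf))
    }
    // Hypothetical leaf node standing in for a socket streaming source.
    case class SocketSourceRelation(host: String, port: Int) extends Plan {
      val children: Seq[Plan] = Seq.empty
    }
    case class Project(child: Plan) extends Plan {
      val children: Seq[Plan] = Seq(child)
    }

    val plan: Plan = Project(SocketSourceRelation("localhost", 9999))
    // The pattern the diff uses: collect matching nodes, then test non-emptiness.
    val socketExists: Boolean =
      plan.collect { case s: SocketSourceRelation => s }.nonEmpty
    println(socketExists) // prints true
    ```

    The reviewer's objection is to where this check lives, not to the traversal itself: hard-coding per-source checks in `DataStreamWriter` scatters source-specific logic that the source implementation should own.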


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org