Posted to reviews@spark.apache.org by frreiss <gi...@git.apache.org> on 2016/04/13 20:17:21 UTC

[GitHub] spark pull request: [SPARK-12224][SPARKR] R support for JDBC sourc...

Github user frreiss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/10480#discussion_r59598252
  
    --- Diff: core/src/main/scala/org/apache/spark/api/r/SerDe.scala ---
    @@ -355,6 +355,13 @@ private[spark] object SerDe {
               writeInt(dos, v.length)
               v.foreach(elem => writeObject(dos, elem))
     
    +        // Handle Properties
    --- End diff --
    
    Personally I don't think that special-casing the Properties object here is a major problem -- java.util.Properties is a very commonly used class, and it would make sense for the RPC layer of SparkR to handle Properties alongside other common types like Map and String. But it makes sense to defer to Shivaram on this point. I would vote for option (2) above.
    
    Note that, as far as I can see, the code here to pass a Properties object back to R is only triggered by the test cases in this PR. The actual code for invoking `read.jdbc()` only _writes_ to Properties objects.
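    
    For illustration only (this is not the PR's actual diff, which is not shown in full here), a special case for Properties in the SerDe write path could look roughly like the sketch below. It treats the Properties as a length-prefixed list of string key/value pairs, much like a Map would be sent; the helper name and wire layout are assumptions, and only standard java.io.DataOutputStream methods are used.
    
        import java.io.DataOutputStream
        import java.util.Properties
        import scala.collection.JavaConverters._
    
        // Hypothetical sketch: write a java.util.Properties as a
        // length-prefixed sequence of string key/value pairs, the same
        // general shape a Map would take on the wire.
        def writeProperties(dos: DataOutputStream, props: Properties): Unit = {
          val keys = props.stringPropertyNames().asScala.toSeq
          dos.writeInt(keys.length)                  // number of entries
          keys.foreach { key =>
            dos.writeUTF(key)                        // key
            dos.writeUTF(props.getProperty(key))     // value
          }
        }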

