Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2019/03/25 15:07:26 UTC

[GitHub] [spark] agrawalpooja commented on a change in pull request #24151: [SPARK-26739][SQL] Standardized Join Types for DataFrames

URL: https://github.com/apache/spark/pull/24151#discussion_r268688898
 
 

 ##########
 File path: mllib/src/main/scala/org/apache/spark/ml/recommendation/ALS.scala
 ##########
 @@ -44,6 +44,7 @@ import org.apache.spark.mllib.linalg.CholeskyDecomposition
 import org.apache.spark.mllib.optimization.NNLS
 import org.apache.spark.rdd.RDD
 import org.apache.spark.sql.{DataFrame, Dataset}
+import org.apache.spark.sql.catalyst.plans._
 
 Review comment:
   Yep, initially I created an enum and was using that. But later someone pointed out in JIRA that we already have a JoinType class which we can reuse here.
   Is it fine if I use an enum here?
   (The motive is to have standardised join types and to detect invalid join types at compile time.)
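   To illustrate the compile-time-safety motive discussed above, here is a minimal, hypothetical sketch (not Spark's actual `JoinType` from `org.apache.spark.sql.catalyst.plans`): modeling join types as a sealed trait means a caller can only pass a known case, so a typo that a string argument would let through becomes a compile error.

   ```scala
   // Hypothetical sketch, assuming a simplified stand-in for Spark's JoinType.
   // A sealed trait restricts the set of join types at compile time, unlike
   // the string-based form Dataset.join(other, cols, "left_outer"), where a
   // misspelled string only fails at runtime.
   sealed trait JoinKind { def sql: String }
   case object Inner      extends JoinKind { val sql = "INNER" }
   case object LeftOuter  extends JoinKind { val sql = "LEFT OUTER" }
   case object RightOuter extends JoinKind { val sql = "RIGHT OUTER" }
   case object FullOuter  extends JoinKind { val sql = "FULL OUTER" }

   // Hypothetical helper: any value passed here must be one of the sealed
   // cases above; an invalid join type cannot be constructed.
   def describeJoin(jt: JoinKind): String = s"${jt.sql} join"

   println(describeJoin(LeftOuter)) // prints "LEFT OUTER join"
   ```

   With a design like this, pattern matches over `JoinKind` are also exhaustiveness-checked by the compiler, which is the same benefit the PR is after for DataFrame join types.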

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org