Posted to reviews@spark.apache.org by "pengzhon-db (via GitHub)" <gi...@apache.org> on 2023/05/12 00:56:13 UTC

[GitHub] [spark] pengzhon-db commented on a diff in pull request #41146: [SPARK-43474] [SS] [CONNECT] Add a spark connect function to create DataFrame reference

pengzhon-db commented on code in PR #41146:
URL: https://github.com/apache/spark/pull/41146#discussion_r1191808469


##########
connector/connect/common/src/main/protobuf/spark/connect/relations.proto:
##########
@@ -394,6 +395,18 @@ message CachedLocalRelation {
   string hash = 3;
 }
 
+// Represents a DataFrame that has been cached on server.
+message CachedDataFrame {
+  // (Required) An identifier of the user which cached the dataframe
+  string userId = 1;

Review Comment:
   We can also just get userId and sessionId from the server via the request, instead of passing them from here.
   But that would require updating [transformRelation()](https://github.com/apache/spark/blob/master/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/planner/SparkConnectPlanner.scala#L87) to take two more parameters, which means all those `transform...()` methods would need to be updated to accept two more parameters as well.
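   For illustration only, a minimal sketch of what that alternative would look like. The types and names below are hand-written stand-ins, not the actual `SparkConnectPlanner` API; the point is just that threading the identifiers through `transformRelation()` forces the same two extra parameters onto every `transform...()` helper.
   
   ```scala
   // Rough sketch, not real planner code: stand-in types to show the signature change.
   object PlannerSketch {
     case class Relation(relType: String)          // stand-in for proto.Relation
     case class LogicalPlan(desc: String)          // stand-in for Catalyst's LogicalPlan
   
     // Current shape: the relation alone is passed down the transform chain.
     def transformRelation(rel: Relation): LogicalPlan =
       LogicalPlan(s"plan for ${rel.relType}")
   
     // Alternative discussed above: take userId/sessionId from the request instead,
     // which means every transform...() helper grows the same two parameters.
     def transformRelation(rel: Relation, userId: String, sessionId: String): LogicalPlan =
       rel.relType match {
         case "cached_dataframe" => transformCachedDataFrame(rel, userId, sessionId)
         case other              => LogicalPlan(s"plan for $other")
       }
   
     private def transformCachedDataFrame(
         rel: Relation, userId: String, sessionId: String): LogicalPlan =
       LogicalPlan(s"look up cached DataFrame for user=$userId, session=$sessionId")
   }
   ```
   
   Given that fan-out across all the transform helpers, carrying userId/sessionId in the message itself keeps the planner signatures unchanged.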



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org