Posted to reviews@spark.apache.org by GitBox <gi...@apache.org> on 2022/09/15 07:40:36 UTC

[GitHub] [spark] gaborgsomogyi commented on pull request #37558: [SPARK-38954][CORE] Implement sharing of cloud credentials among driver and executors

gaborgsomogyi commented on PR #37558:
URL: https://github.com/apache/spark/pull/37558#issuecomment-1247704457

   I've taken a look at the PR from a high-level perspective and initially have a single question.
   Why is building a new universe w/ 1k lines of changes needed, instead of using UGI as the token container like we did for Kafka?
   At first glance I would solve this problem by:
   * Adding a new provider which stores the token inside UGI
   * The added tokens would then be transferred to the executors w/o any additional code
   * On the executor side the token is available; it just needs to be read when authentication is needed
   
   The only important thing is that when one puts the token into the UGI on the driver side and gets it back for authentication on the executor side, the formats must match.
   Maybe the S3 token is so special that the shown code is needed, but at the moment I don't see why.
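   The driver-store / executor-read contract described above can be sketched without any Spark or Hadoop dependencies. This is a hypothetical illustration only: `Credentials`, `TokenProvider`, `CloudTokenProvider`, and the alias `"cloud"` are stand-ins loosely mirroring Hadoop's `Credentials` and Spark's `HadoopDelegationTokenProvider` developer API, not the actual classes.

   ```scala
   // Stand-in for Hadoop Credentials: an alias -> token-bytes map that Spark
   // would ship from driver to executors alongside other delegation tokens.
   final class Credentials {
     private val tokens = scala.collection.mutable.Map.empty[String, Array[Byte]]
     def addToken(alias: String, token: Array[Byte]): Unit = tokens(alias) = token
     def getToken(alias: String): Option[Array[Byte]] = tokens.get(alias)
   }

   // Shape of a delegation token provider (hypothetical; loosely mirrors
   // Spark's HadoopDelegationTokenProvider trait).
   trait TokenProvider {
     def serviceName: String
     // Driver side: obtain the token and store it under a well-known alias.
     def obtainDelegationTokens(creds: Credentials): Unit
   }

   // Hypothetical cloud-credential provider. In reality the token bytes would
   // come from the cloud provider's STS/credentials endpoint.
   object CloudTokenProvider extends TokenProvider {
     val serviceName = "cloud" // hypothetical alias; must match the reader below
     def obtainDelegationTokens(creds: Credentials): Unit =
       creds.addToken(serviceName, "session-token-bytes".getBytes("UTF-8"))
   }

   // Executor side: read the token back with the SAME alias and format.
   def readCloudToken(creds: Credentials): Option[String] =
     creds.getToken(CloudTokenProvider.serviceName).map(new String(_, "UTF-8"))
   ```

   The point of the sketch is the last line: the executor does no extra transport work, it only has to agree with the driver on the alias and serialization format.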
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
For additional commands, e-mail: reviews-help@spark.apache.org