Posted to user@spark.apache.org by RK Aduri <rk...@collectivei.com> on 2016/09/14 18:02:50 UTC

Best Practices for Spark-Python App Deployment

Dear All:

	We are trying to deploy a Spark/Python app onto an edge node (using Jenkins), and the dilemma is whether we also need to clone the git repo to every node in the cluster. The reason: if we submit with deploy mode "cluster" and master "yarn", the driver expects the same file structure to exist wherever it happens to run in the cluster. Are there any best practices for this, or is the plain approach of cloning the git repo onto all the nodes the usual way? Any suggestion is welcome.
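	One common alternative I have seen (sketched below under assumed paths and names; "mypackage", "app_deps.zip", and "main.py" are hypothetical placeholders) is to package the app's Python modules on the edge node and let spark-submit ship them with the job via --py-files, so YARN distributes the code to the driver and executors and no per-node git clone is needed:

```shell
# Package the project's Python modules into a zip on the edge node
# (paths and module names below are hypothetical).
cd /path/to/repo
zip -r app_deps.zip mypackage/

# Submit in YARN cluster mode; --py-files ships the zip alongside
# the job, so the driver and executors can import the modules
# without the repo existing on every node.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --py-files app_deps.zip \
  main.py
```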

Thanks,
RK
