Posted to user@spark.apache.org by Mina Aslani <as...@gmail.com> on 2018/11/28 18:44:10 UTC
Do we need to kill a spark job every time we change and deploy it?
Hi,
I have a question for you.
Do we need to kill a Spark job every time we change and deploy it to the
cluster? Or, is there a way for Spark to automatically pick up the most
recent jar version?
Best regards,
Mina
Re: Do we need to kill a spark job every time we change and deploy it?
Posted by 965 <10...@qq.com>.
I think if your job is already running and you deploy a new jar that is a newer version of it, Spark will treat the new jar as a separate job;
jobs are distinguished by their job IDs. So if you want to replace the jar, you have to kill the old job every time.
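In practice that means killing the running application and resubmitting the new jar by hand. A minimal sketch of those steps on YARN (the application ID, class name, and jar path are placeholders):

# find the running application's ID
yarn application -list

# kill the old version
yarn application -kill application_1543420000000_0001

# resubmit with the newly built jar
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyJob \
  /path/to/my-job-2.0.jar

In standalone cluster mode, spark-submit --kill <submissionId> plays the same role as yarn application -kill.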
------------------ Original Message ------------------
From: "Mina Aslani"<as...@gmail.com>;
Sent: Thursday, November 29, 2018, 2:44 AM
To: "user @spark"<us...@spark.apache.org>;
Subject: Do we need to kill a spark job every time we change and deploy it?
Hi,
I have a question for you.
Do we need to kill a Spark job every time we change and deploy it to the cluster? Or, is there a way for Spark to automatically pick up the most recent jar version?
Best regards,
Mina
Re: Do we need to kill a spark job every time we change and deploy it?
Posted by Irving Duran <ir...@gmail.com>.
Are you referring to having Spark pick up a new jar build? If so, you can
probably script that in bash.
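A minimal sketch of such a script, assuming the job runs on YARN under a fixed application name (the name, class, and jar path are placeholders):

#!/usr/bin/env bash
# redeploy.sh: kill the running application by name, then resubmit the new jar
set -e

APP_NAME="MyJob"
JAR="/path/to/my-job-latest.jar"

# look up the running application's ID by name (assumes the name has no spaces)
APP_ID=$(yarn application -list 2>/dev/null | awk -v name="$APP_NAME" '$2 == name {print $1}')

# kill the old version if one is running
if [ -n "$APP_ID" ]; then
  yarn application -kill "$APP_ID"
fi

# submit the freshly built jar under the same name
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --name "$APP_NAME" \
  --class com.example.MyJob \
  "$JAR"

You could hook a script like this into your build pipeline so that every new jar triggers a redeploy.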
Thank You,
Irving Duran
On Wed, Nov 28, 2018 at 12:44 PM Mina Aslani <as...@gmail.com> wrote:
> Hi,
>
> I have a question for you.
> Do we need to kill a Spark job every time we change and deploy it to the
> cluster? Or, is there a way for Spark to automatically pick up the most
> recent jar version?
>
> Best regards,
> Mina
>