Posted to user@hadoop.apache.org by Jun Ping Du <jd...@vmware.com> on 2013/10/24 07:06:54 UTC

Re: dynamically resizing Hadoop cluster on AWS?

Move to @user alias.

----- Original Message -----
From: "Jun Ping Du" <jd...@vmware.com>
To: general@hadoop.apache.org
Sent: Wednesday, October 23, 2013 10:03:27 PM
Subject: Re: dynamically resizing Hadoop cluster on AWS?

If your instances run only compute nodes (TaskTracker or NodeManager), then decommissioning the nodes and shutting down the related EC2 instances should be fine, although some finished or running tasks might need to be re-run automatically. In the future we plan to support graceful decommission (tracked by YARN-914 and MAPREDUCE-5381) so that no tasks need to be rerun in this case (but you will need to wait a while).
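
As a rough sketch of that decommission-then-terminate flow (this assumes YARN with yarn.resourcemanager.nodes.exclude-path already pointing at an exclude file and the AWS CLI configured; the file path, hostnames, and instance IDs below are placeholders, not taken from this thread; on MRv1 the same idea uses mapred.hosts.exclude plus "hadoop mradmin -refreshNodes"):

    #!/usr/bin/env python
    # Rough sketch only: decommission compute nodes, then terminate their EC2 instances.
    # EXCLUDE_FILE, the hostnames, and the instance IDs are hypothetical placeholders.
    import subprocess

    EXCLUDE_FILE = "/etc/hadoop/conf/yarn.exclude"
    NODES = {
        "ip-10-0-0-12.ec2.internal": "i-0a1b2c3d",
        "ip-10-0-0-13.ec2.internal": "i-0a1b2c3e",
    }

    # 1. List the hosts in the exclude file so the ResourceManager stops scheduling on them.
    with open(EXCLUDE_FILE, "a") as f:
        for host in NODES:
            f.write(host + "\n")

    # 2. Tell the ResourceManager to re-read the include/exclude lists, which starts
    #    decommissioning those NodeManagers.
    subprocess.check_call(["yarn", "rmadmin", "-refreshNodes"])

    # 3. Terminate the backing EC2 instances. In practice, poll "yarn node -list"
    #    first and wait until these nodes report DECOMMISSIONED.
    subprocess.check_call(["aws", "ec2", "terminate-instances",
                           "--instance-ids"] + list(NODES.values()))

Until graceful decommission lands, any tasks still running on those nodes will simply be re-scheduled elsewhere, as noted above.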

Thanks,

Junping

----- Original Message -----
From: "Nan Zhu" <zh...@gmail.com>
To: general@hadoop.apache.org
Sent: Wednesday, October 23, 2013 8:15:51 PM
Subject: Re: dynamically resizing Hadoop cluster on AWS?

Oh, I’m not running HDFS on the instances; I use S3 to store the data

--  
Nan Zhu
School of Computer Science,
McGill University



On Wednesday, October 23, 2013 at 11:11 PM, Nan Zhu wrote:

> Hi, all  
>  
> I’m running a Hadoop cluster on AWS EC2,  
>  
> I would like to dynamically resize the cluster to reduce cost. Is there any solution to achieve this?  
>  
> E.g., I would like to cut the cluster size in half. Is it safe to just shut down the instances? (If some tasks are still running on them, can I rely on speculative execution to re-run them on other nodes?)
>  
> I cannot use EMR, since I’m running a customized version of Hadoop  
>  
> Best,  
>  
> --  
> Nan Zhu
> School of Computer Science,
> McGill University
>  

Re: dynamically resizing Hadoop cluster on AWS?

Posted by Nan Zhu <zh...@gmail.com>.
Thank you very much for replying, and sorry for posting on the wrong list.  

Best,  

--  
Nan Zhu
School of Computer Science,
McGill University



On Thursday, October 24, 2013 at 1:06 AM, Jun Ping Du wrote:

> Move to @user alias.
>  
> ----- Original Message -----
> From: "Jun Ping Du" <jdu@vmware.com>
> To: general@hadoop.apache.org
> Sent: Wednesday, October 23, 2013 10:03:27 PM
> Subject: Re: dynamically resizing Hadoop cluster on AWS?
>  
> If your instances run only compute nodes (TaskTracker or NodeManager), then decommissioning the nodes and shutting down the related EC2 instances should be fine, although some finished or running tasks might need to be re-run automatically. In the future we plan to support graceful decommission (tracked by YARN-914 and MAPREDUCE-5381) so that no tasks need to be rerun in this case (but you will need to wait a while).
>  
> Thanks,
>  
> Junping
>  
> ----- Original Message -----
> From: "Nan Zhu" <zhunansjtu@gmail.com>
> To: general@hadoop.apache.org
> Sent: Wednesday, October 23, 2013 8:15:51 PM
> Subject: Re: dynamically resizing Hadoop cluster on AWS?
>  
> Oh, I’m not running HDFS on the instances; I use S3 to store the data
>  
> --  
> Nan Zhu
> School of Computer Science,
> McGill University
>  
>  
>  
> On Wednesday, October 23, 2013 at 11:11 PM, Nan Zhu wrote:
>  
> > Hi, all  
> >  
> > I’m running a Hadoop cluster on AWS EC2,  
> >  
> > I would like to dynamically resize the cluster to reduce cost. Is there any solution to achieve this?  
> >  
> > E.g., I would like to cut the cluster size in half. Is it safe to just shut down the instances? (If some tasks are still running on them, can I rely on speculative execution to re-run them on other nodes?)
> >  
> > I cannot use EMR, since I’m running a customized version of Hadoop  
> >  
> > Best,  
> >  
> > --  
> > Nan Zhu
> > School of Computer Science,
> > McGill University
> >  
> >  
>  
>  
>  


