Posted to common-user@hadoop.apache.org by yaoxiaohua <ya...@outlook.com> on 2016/01/08 04:23:35 UTC

dfs.datanode.failed.volumes.tolerated change

Hi,

                The DataNode process shut down abnormally; I found that one
disk had become inaccessible.

                So I set dfs.datanode.failed.volumes.tolerated = 2 in
hdfs-site.xml and restarted the DataNode process, and now it works.
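
                For reference, this is the property I added to hdfs-site.xml
(a value of 2 means the DataNode keeps running as long as no more than two
data volumes have failed):

                <property>
                    <name>dfs.datanode.failed.volumes.tolerated</name>
                    <value>2</value>
                </property>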

 

                One day later, we replaced the failed disk with a good hard
disk, created the data directory on it, and chowned it to the hadoop user.
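
                On the new disk the steps were roughly as follows (the /data3
mount point is only an example; the real path must match the corresponding
entry in dfs.datanode.data.dir):

                # example mount point; use the directory listed in
                # dfs.datanode.data.dir for this DataNode
                mkdir -p /data3/hdfs/dn
                chown -R hadoop:hadoop /data3/hdfs/dn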

                Now I want to know: if I don't restart the DataNode process,
when will the DataNode notice that the disk is good again?

                Or do I have to restart the DataNode process for it to pick up
the change?

                

Env: Hadoop 2.6

 

 

Best Regards,

Evan 

 


RE: dfs.datanode.failed.volumes.tolerated change

Posted by yaoxiaohua <ya...@outlook.com>.
Hi Kai,

OK, I will have a look and try it. Thanks for your help with this.

 

Best Regards,

Evan

 

From: Zheng, Kai [mailto:kai.zheng@intel.com] 
Sent: Friday, January 08, 2016 6:06 PM
To: yaoxiaohua
Subject: RE: dfs.datanode.failed.volumes.tolerated change

 

Ref.
http://blog.cloudera.com/blog/2015/05/new-in-cdh-5-4-how-swapping-of-hdfs-datanode-drives/

 

Please discuss this in the public email thread you initiated, so others may
also help you, in case I don’t or can’t answer your questions.

 

Regards,

Kai

 

From: yaoxiaohua [mailto:yaoxiaohua@outlook.com] 
Sent: Friday, January 08, 2016 4:57 PM
To: Zheng, Kai <ka...@intel.com>
Subject: RE: dfs.datanode.failed.volumes.tolerated change

 

I didn't change anything in the dfs.datanode.data.dir property;

I only changed the dfs.datanode.failed.volumes.tolerated property.

Thanks for your help.

 

 

 

From: Zheng, Kai [mailto:kai.zheng@intel.com] 
Sent: Friday, January 08, 2016 4:40 PM
To: yaoxiaohua; user@hadoop.apache.org
Subject: RE: dfs.datanode.failed.volumes.tolerated change

 

As far as I know, Hadoop 2.6 supports disk hot-swapping on a DataNode
without restarting the DataNode. Roughly, you need to do two operations:

1) update dfs.datanode.data.dir in the DataNode configuration to reflect
your removed/added disks;

2) let the DataNode reload its configuration.

 

Please google and check out the related docs for this feature. Hope this helps.

 

Regards,

Kai

 

From: yaoxiaohua [mailto:yaoxiaohua@outlook.com] 
Sent: Friday, January 08, 2016 11:24 AM
To: user@hadoop.apache.org
Subject: dfs.datanode.failed.volumes.tolerated change

 

Hi,

                The DataNode process shut down abnormally; I found that one
disk had become inaccessible.

                So I set dfs.datanode.failed.volumes.tolerated = 2 in
hdfs-site.xml and restarted the DataNode process, and now it works.

 

                One day later, we replaced the failed disk with a good hard
disk, created the data directory on it, and chowned it to the hadoop user.

                Now I want to know: if I don't restart the DataNode process,
when will the DataNode notice that the disk is good again?

                Or do I have to restart the DataNode process for it to pick up
the change?

                

Env: Hadoop 2.6

 

 

Best Regards,

Evan 

 


RE: dfs.datanode.failed.volumes.tolerated change

Posted by "Zheng, Kai" <ka...@intel.com>.
As far as I know, Hadoop 2.6 supports disk hot-swapping on a DataNode without restarting the DataNode. Roughly, you need to do two operations (sketched below):
1) update dfs.datanode.data.dir in the DataNode configuration to reflect the removed/added disks;
2) let the DataNode reload its configuration.
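
A rough sketch of how that could look, assuming the replaced disk is mounted under a hypothetical /data3 and the DataNode IPC port is the default 50020 (dfs.datanode.ipc.address):

    # 1) In that DataNode's hdfs-site.xml, make dfs.datanode.data.dir list the
    #    directories you want in service, including the replaced disk, e.g.
    #    <value>/data1/hdfs/dn,/data2/hdfs/dn,/data3/hdfs/dn</value>
    #
    # 2) Tell the DataNode to reload the changed configuration (no restart):
    hdfs dfsadmin -reconfig datanode dn1.example.com:50020 start
    #    then poll until the reconfiguration task reports it has finished:
    hdfs dfsadmin -reconfig datanode dn1.example.com:50020 status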

Please google and check out the related docs for this feature. Hope this helps.

Regards,
Kai

From: yaoxiaohua [mailto:yaoxiaohua@outlook.com]
Sent: Friday, January 08, 2016 11:24 AM
To: user@hadoop.apache.org
Subject: dfs.datanode.failed.volumes.tolerated change

Hi,
                The DataNode process shut down abnormally; I found that one disk had become inaccessible.
                So I set dfs.datanode.failed.volumes.tolerated = 2 in hdfs-site.xml and restarted the DataNode process, and now it works.

                One day later, we replaced the failed disk with a good hard disk, created the data directory on it, and chowned it to the hadoop user.
                Now I want to know: if I don't restart the DataNode process, when will the DataNode notice that the disk is good again?
                Or do I have to restart the DataNode process for it to pick up the change?

Env: Hadoop 2.6


Best Regards,
Evan

