Posted to user@hadoop.apache.org by "Sebastian.Lehrack" <Se...@physik.uni-muenchen.de> on 2012/11/07 17:48:24 UTC

fsck only working on namenode

Hi,

I've installed Hadoop 1.0.3 on a cluster of about 25 nodes, and until now
it has been working fine.
Recently I had to use fsck in a map process, which leads to a
connection refused error.
I read about this error and that I should check firewalls, proper
config files, etc.
The command only works on the namenode.
If I use the browser for the command, it works (the request is also
refused there, but because of the web user's permissions).
I can use telnet to connect to the namenode.
In hdfs-site.xml, I set dfs.http.adress to hostname:50070. I tried both
the IP address and the hostname, and marked the property as final.
I'm still getting this connection refused error when using fsck on a
node other than the namenode.

Any further suggestions would be great. I use the fsck command to check
the number of blocks in which a file is stored on HDFS. Maybe
there's another possibility?
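
For reference, the kind of call I mean looks like this (the path here is
just an example, not a real file on my cluster):

    hadoop fsck /user/sebastian/somefile -files -blocks -locations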

Greetings

RE: fsck only working on namenode

Posted by Brahma Reddy Battula <br...@huawei.com>.
Wherever you are running the fsck command, it is not picking up dfs.http.address (there may be other configuration files on the classpath in which dfs.http.address is not set).

Please check the classpath...
________________________________________
From: 梁李印 [liyin.liangly@aliyun-inc.com]
Sent: Thursday, November 08, 2012 4:58 PM
To: user@hadoop.apache.org
Subject: Re: fsck only working on namenode

Spelling mistake? dfs.http.adress --> dfs.http.address

Liyin Liang
-----Original Message-----
From: Sebastian.Lehrack [mailto:Sebastian.Lehrack@physik.uni-muenchen.de]
Sent: November 8, 2012 0:48
To: user@hadoop.apache.org
Subject: fsck only working on namenode

Hi,

I've installed Hadoop 1.0.3 on a cluster of about 25 nodes, and until now
it has been working fine.
Recently I had to use fsck in a map process, which leads to a
connection refused error.
I read about this error and that I should check firewalls, proper
config files, etc.
The command only works on the namenode.
If I use the browser for the command, it works (the request is also
refused there, but because of the web user's permissions).
I can use telnet to connect to the namenode.
In hdfs-site.xml, I set dfs.http.adress to hostname:50070. I tried both
the IP address and the hostname, and marked the property as final.
I'm still getting this connection refused error when using fsck on a
node other than the namenode.

Any further suggestions would be great. I use the fsck command to check
the number of blocks in which a file is stored on HDFS. Maybe
there's another possibility?

Greetings


Re: fsck only working on namenode

Posted by 梁李印 <li...@aliyun-inc.com>.
Spelling mistake? dfs.http.adress --> dfs.http.address
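
With the correct spelling, the hdfs-site.xml entry would look like this
(namenode-host is just a placeholder for your namenode's hostname):

    <property>
      <name>dfs.http.address</name>
      <value>namenode-host:50070</value>
      <final>true</final>
    </property>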

Liyin Liang
-----Original Message-----
From: Sebastian.Lehrack [mailto:Sebastian.Lehrack@physik.uni-muenchen.de]
Sent: November 8, 2012 0:48
To: user@hadoop.apache.org
Subject: fsck only working on namenode

Hi,

I've installed Hadoop 1.0.3 on a cluster of about 25 nodes, and until now
it has been working fine.
Recently I had to use fsck in a map process, which leads to a
connection refused error.
I read about this error and that I should check firewalls, proper
config files, etc.
The command only works on the namenode.
If I use the browser for the command, it works (the request is also
refused there, but because of the web user's permissions).
I can use telnet to connect to the namenode.
In hdfs-site.xml, I set dfs.http.adress to hostname:50070. I tried both
the IP address and the hostname, and marked the property as final.
I'm still getting this connection refused error when using fsck on a
node other than the namenode.

Any further suggestions would be great. I use the fsck command to check
the number of blocks in which a file is stored on HDFS. Maybe
there's another possibility?

Greetings


Re: fsck only working on namenode

Posted by Harsh J <ha...@cloudera.com>.
While your problem is interesting, you need not use fsck to get the block
IDs of a file, as that's not the right way to fetch them (it's a rather
long, should-be-disallowed route). You can leverage the FileSystem API
itself to do that. See FileSystem#getFileBlockLocations(…), i.e.
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#getFileBlockLocations(org.apache.hadoop.fs.FileStatus,%20long,%20long)
if you use the FileSystem API, or FileContext#listLocatedStatus(…), i.e.
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileContext.html#listLocatedStatus(org.apache.hadoop.fs.Path)
if you use the FileContext API.
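
A minimal sketch of the FileSystem route (the class name and the argument
handling here are illustrative, not from your job):

    import java.util.Arrays;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockCount {
      public static void main(String[] args) throws Exception {
        // Picks up core-site.xml/hdfs-site.xml from the classpath.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path path = new Path(args[0]);
        FileStatus status = fs.getFileStatus(path);

        // Ask the NameNode for the block locations covering the whole file.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());

        System.out.println(path + " is stored in " + blocks.length + " block(s)");
        for (BlockLocation b : blocks) {
          System.out.println("offset=" + b.getOffset() + " length=" + b.getLength()
              + " hosts=" + Arrays.toString(b.getHosts()));
        }
      }
    }

This talks to the NameNode over its RPC address (fs.default.name) rather
than the HTTP port, so it also sidesteps the dfs.http.address issue
entirely.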

Onto your problem though, can you successfully do a `telnet NNHOST
50070` from one of your slave nodes?

On Wed, Nov 7, 2012 at 10:18 PM, Sebastian.Lehrack
<Se...@physik.uni-muenchen.de> wrote:
> Hi,
>
> I've installed Hadoop 1.0.3 on a cluster of about 25 nodes, and until now
> it has been working fine.
> Recently I had to use fsck in a map process, which leads to a
> connection refused error.
> I read about this error and that I should check firewalls, proper
> config files, etc.
> The command only works on the namenode.
> If I use the browser for the command, it works (the request is also
> refused there, but because of the web user's permissions).
> I can use telnet to connect to the namenode.
> In hdfs-site.xml, I set dfs.http.adress to hostname:50070. I tried both
> the IP address and the hostname, and marked the property as final.
> I'm still getting this connection refused error when using fsck on a
> node other than the namenode.
>
> Any further suggestions would be great. I use the fsck command to check
> the number of blocks in which a file is stored on HDFS. Maybe
> there's another possibility?
>
> Greetings



-- 
Harsh J
