Posted to hdfs-user@hadoop.apache.org by chenchun <ch...@gmail.com> on 2013/12/11 15:01:21 UTC

empty file

Hi,  
I've found some files on HDFS that the command “hadoop fs -ls” reports as non-empty, but “fsck” reports that these files have no replicas. Is this normal?

$ hadoop fs -ls /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo;
Found 1 items
-rw-r--r--   3 sankuai supergroup    1123927 2013-12-06 03:22 /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo


$  /opt/local/hadoop/bin/hadoop fsck /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo  -files -blocks -locations -racks
FSCK started by sankuai from /10.64.10.102 for path /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo at Wed Dec 11 20:35:30 CST 2013
Status: HEALTHY
 Total size:    0 B (Total open files size: 1123927 B)
 Total dirs:    0
 Total files:   0 (Files currently being written: 1)
 Total blocks (validated):   0 (Total open file blocks (not validated): 1)
 Minimally replicated blocks:   0
 Over-replicated blocks:     0
 Under-replicated blocks:    0
 Mis-replicated blocks:      0
 Default replication factor: 3
 Average block replication:  0.0
 Corrupt blocks:             0
 Missing replicas:           0
 Number of data-nodes:       38
 Number of racks:            6
FSCK ended at Wed Dec 11 20:35:30 CST 2013 in 1 milliseconds


The filesystem under path '/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo' is HEALTHY  

--  
chenchun
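
The root of the discrepancy, as the replies below work out, is that the file is still open for write: "hadoop fs -ls" shows the length the namenode has recorded, while a plain fsck only validates blocks of closed files, so this file shows up only in the parenthesised "open" totals. A minimal Java sketch of the same comparison through the public FileSystem API (the class name LsVsBlocks is illustrative, not from the thread):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LsVsBlocks {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path p = new Path("/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo");
    // What "hadoop fs -ls" prints: the length recorded by the namenode.
    FileStatus st = fs.getFileStatus(p);
    // What the block metadata covers; for a file still open for write,
    // the located blocks can lag behind the reported length.
    long covered = 0;
    for (BlockLocation b : fs.getFileBlockLocations(st, 0, st.getLen())) {
      covered += b.getLength();
    }
    System.out.println("ls length = " + st.getLen()
        + " B, covered by located blocks = " + covered + " B");
  }
}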


Re: empty file

Posted by chenchun <ch...@gmail.com>.
Thanks, all of you. The file is indeed in an 'open' state. Is there any way I can read it?

$ /opt/local/hadoop/bin/hadoop fsck /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo -openforwrite  
FSCK started by sankuai from /10.64.10.102 for path /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo at Thu Dec 12 14:57:41 CST 2013
/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo 1123927 bytes, 1 block(s), OPENFORWRITE:  
/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo:  Under replicated blk_-5402857470524491959_58312275. Target Replicas is 3 but found 2 replica(s).
Status: HEALTHY
 Total size:    1123927 B
 Total dirs:    0
 Total files:   1
 Total blocks (validated):      1 (avg. block size 1123927 B)
 Minimally replicated blocks:   1 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       1 (100.0 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    3
 Average block replication:     2.0
 Corrupt blocks:                0
 Missing replicas:              1 (50.0 %)
 Number of data-nodes:          38
 Number of racks:               6
FSCK ended at Thu Dec 12 14:57:41 CST 2013 in 1 milliseconds


The filesystem under path '/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo' is HEALTHY  

--  
chenchun


On Thursday, 12 December, 2013 at 12:09 PM, Harsh J wrote:

> That file is still in an 'open' state. Running the command below may show it:
>  
> /opt/local/hadoop/bin/hadoop fsck
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo -openforwrite
>  
> On Thu, Dec 12, 2013 at 9:22 AM, chenchun <chenchun.feed@gmail.com> wrote:
> > Nothing is still writing to it. I can't read that file.
> > I'm using hadoop 1.0.1.
> >  
> > $ hadoop fs -text /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
> > 13/12/12 11:48:56 WARN hdfs.DFSClient: Failed to connect to
> > /10.64.32.14:50010, add to deadNodes and continuejava.io.IOException: Got
> > error for OP_READ_BLOCK, self=/10.64.10.102:51390,
> > remote=/10.64.32.14:50010, for file
> > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block
> > -5402857470524491959_58312275
> > 13/12/12 11:48:56 WARN hdfs.DFSClient: Failed to connect to
> > /10.64.32.36:50010, add to deadNodes and continuejava.io.IOException: Got
> > error for OP_READ_BLOCK, self=/10.64.10.102:41277,
> > remote=/10.64.32.36:50010, for file
> > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block
> > -5402857470524491959_58312275
> > 13/12/12 11:48:56 INFO hdfs.DFSClient: Could not obtain block
> > blk_-5402857470524491959_58312275 from any node: java.io.IOException: No
> > live nodes contain current block. Will get new block locations from namenode
> > and retry...
> > 13/12/12 11:48:59 WARN hdfs.DFSClient: Failed to connect to
> > /10.64.32.14:50010, add to deadNodes and continuejava.io.IOException: Got
> > error for OP_READ_BLOCK, self=/10.64.10.102:51403,
> > remote=/10.64.32.14:50010, for file
> > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block
> > -5402857470524491959_58312275
> > 13/12/12 11:48:59 WARN hdfs.DFSClient: Failed to connect to
> > /10.64.32.36:50010, add to deadNodes and continuejava.io.IOException: Got
> > error for OP_READ_BLOCK, self=/10.64.10.102:41290,
> > remote=/10.64.32.36:50010, for file
> > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block
> > -5402857470524491959_58312275
> > 13/12/12 11:48:59 INFO hdfs.DFSClient: Could not obtain block
> > blk_-5402857470524491959_58312275 from any node: java.io.IOException: No
> > live nodes contain current block. Will get new block locations from namenode
> > and retry...
> >  
> > --
> > chenchun
> >  
> > On Thursday, 12 December, 2013 at 5:51 AM, John Meagher wrote:
> >  
> > Is something still writing to it?
> > ...
> > Total files: 0 (Files currently being written: 1)
> > Total blocks (validated): 0 (Total open file blocks (not validated): 1)
> >  
> >  
> >  
> > On Wed, Dec 11, 2013 at 2:37 PM, Adam Kawa <kawa.adam@gmail.com> wrote:
> >  
> > I have never seen anything like that.
> >  
> > Can you read that file?
> >  
> > $ hadoop fs -text /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
> >  
> >  
> > 2013/12/11 chenchun <chenchun.feed@gmail.com>
> >  
> >  
> > Hi,
> > I've found some files on HDFS that the command “hadoop fs -ls” reports as
> > non-empty, but “fsck” reports that these files have no replicas. Is this
> > normal?
> >  
> > $ hadoop fs -ls /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo;
> > Found 1 items
> > -rw-r--r-- 3 sankuai supergroup 1123927 2013-12-06 03:22
> > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
> >  
> > $ /opt/local/hadoop/bin/hadoop fsck
> > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo -files -blocks -locations
> > -racks
> > FSCK started by sankuai from /10.64.10.102 for path
> > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo at Wed Dec 11 20:35:30 CST
> > 2013
> > Status: HEALTHY
> > Total size: 0 B (Total open files size: 1123927 B)
> > Total dirs: 0
> > Total files: 0 (Files currently being written: 1)
> > Total blocks (validated): 0 (Total open file blocks (not validated): 1)
> > Minimally replicated blocks: 0
> > Over-replicated blocks: 0
> > Under-replicated blocks: 0
> > Mis-replicated blocks: 0
> > Default replication factor: 3
> > Average block replication: 0.0
> > Corrupt blocks: 0
> > Missing replicas: 0
> > Number of data-nodes: 38
> > Number of racks: 6
> > FSCK ended at Wed Dec 11 20:35:30 CST 2013 in 1 milliseconds
> >  
> >  
> > The filesystem under path '/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo'
> > is HEALTHY
> >  
> > --
> > chenchun
> >  
>  
>  
>  
>  
> --  
> Harsh J
>  
>  
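
chenchun's question (is there any way to read the file?) never gets a direct answer in the thread. One possibility, offered only as a hedged sketch: ask the namenode to recover the writer's lease, which finalizes the last block and closes the file, after which a normal read should succeed. This assumes the 1.x build in use ships DistributedFileSystem.recoverLease (it was added on the append-capable branches; verify it exists in your exact version before relying on it). Otherwise, the namenode's hard lease limit (an hour by default) should eventually trigger the same recovery on its own.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path p = new Path("/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo");
    // Assumptions: fs.default.name points at HDFS, and this Hadoop build
    // includes recoverLease(Path). It asks the namenode to reclaim the
    // writer's lease and finalize the last block; returns true once the
    // file is closed.
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);
    boolean closed = dfs.recoverLease(p);
    System.out.println("file closed after lease recovery: " + closed);
  }
}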



Re: empty file

Posted by Harsh J <ha...@cloudera.com>.
That file is still in an 'open' state. Running the command below may show it:

/opt/local/hadoop/bin/hadoop fsck
/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo  -openforwrite

On Thu, Dec 12, 2013 at 9:22 AM, chenchun <ch...@gmail.com> wrote:
> Nothing is still writing to it. I can't read that file.
> I'm using hadoop 1.0.1.
>
> $ hadoop fs -text /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
> 13/12/12 11:48:56 WARN hdfs.DFSClient: Failed to connect to
> /10.64.32.14:50010, add to deadNodes and continuejava.io.IOException: Got
> error for OP_READ_BLOCK, self=/10.64.10.102:51390,
> remote=/10.64.32.14:50010, for file
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block
> -5402857470524491959_58312275
> 13/12/12 11:48:56 WARN hdfs.DFSClient: Failed to connect to
> /10.64.32.36:50010, add to deadNodes and continuejava.io.IOException: Got
> error for OP_READ_BLOCK, self=/10.64.10.102:41277,
> remote=/10.64.32.36:50010, for file
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block
> -5402857470524491959_58312275
> 13/12/12 11:48:56 INFO hdfs.DFSClient: Could not obtain block
> blk_-5402857470524491959_58312275 from any node: java.io.IOException: No
> live nodes contain current block. Will get new block locations from namenode
> and retry...
> 13/12/12 11:48:59 WARN hdfs.DFSClient: Failed to connect to
> /10.64.32.14:50010, add to deadNodes and continuejava.io.IOException: Got
> error for OP_READ_BLOCK, self=/10.64.10.102:51403,
> remote=/10.64.32.14:50010, for file
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block
> -5402857470524491959_58312275
> 13/12/12 11:48:59 WARN hdfs.DFSClient: Failed to connect to
> /10.64.32.36:50010, add to deadNodes and continuejava.io.IOException: Got
> error for OP_READ_BLOCK, self=/10.64.10.102:41290,
> remote=/10.64.32.36:50010, for file
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block
> -5402857470524491959_58312275
> 13/12/12 11:48:59 INFO hdfs.DFSClient: Could not obtain block
> blk_-5402857470524491959_58312275 from any node: java.io.IOException: No
> live nodes contain current block. Will get new block locations from namenode
> and retry...
>
> --
> chenchun
>
> On Thursday, 12 December, 2013 at 5:51 AM, John Meagher wrote:
>
> Is something still writing to it?
> ...
> Total files: 0 (Files currently being written: 1)
> Total blocks (validated): 0 (Total open file blocks (not validated): 1)
>
>
>
> On Wed, Dec 11, 2013 at 2:37 PM, Adam Kawa <ka...@gmail.com> wrote:
>
> I have never seen anything like that.
>
> Can you read that file?
>
> $ hadoop fs -text /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
>
>
> 2013/12/11 chenchun <ch...@gmail.com>
>
>
> Hi,
> I've found some files on HDFS that the command “hadoop fs -ls” reports as
> non-empty, but “fsck” reports that these files have no replicas. Is this
> normal?
>
> $ hadoop fs -ls /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo;
> Found 1 items
> -rw-r--r-- 3 sankuai supergroup 1123927 2013-12-06 03:22
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
>
> $ /opt/local/hadoop/bin/hadoop fsck
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo -files -blocks -locations
> -racks
> FSCK started by sankuai from /10.64.10.102 for path
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo at Wed Dec 11 20:35:30 CST
> 2013
> Status: HEALTHY
> Total size: 0 B (Total open files size: 1123927 B)
> Total dirs: 0
> Total files: 0 (Files currently being written: 1)
> Total blocks (validated): 0 (Total open file blocks (not validated): 1)
> Minimally replicated blocks: 0
> Over-replicated blocks: 0
> Under-replicated blocks: 0
> Mis-replicated blocks: 0
> Default replication factor: 3
> Average block replication: 0.0
> Corrupt blocks: 0
> Missing replicas: 0
> Number of data-nodes: 38
> Number of racks: 6
> FSCK ended at Wed Dec 11 20:35:30 CST 2013 in 1 milliseconds
>
>
> The filesystem under path '/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo'
> is HEALTHY
>
> --
> chenchun
>
>



-- 
Harsh J
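
For reference, the "hadoop fsck" command suggested here is implemented by org.apache.hadoop.hdfs.tools.DFSck, so the same check can be driven from Java as well; a minimal sketch (the class name FsckOpenForWrite is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.DFSck;
import org.apache.hadoop.util.ToolRunner;

public class FsckOpenForWrite {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // -openforwrite makes fsck include files that are still open for
    // write, which a default run skips (hence the "0 validated" totals).
    int rc = ToolRunner.run(conf, new DFSck(conf), new String[] {
        "/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo", "-openforwrite" });
    System.exit(rc);
  }
}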

Re: empty file

Posted by chenchun <ch...@gmail.com>.
Nothing is still writing to it. I can't read that file.
I'm using hadoop 1.0.1.  

$ hadoop fs -text /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
13/12/12 11:48:56 WARN hdfs.DFSClient: Failed to connect to /10.64.32.14:50010, add to deadNodes and continuejava.io.IOException: Got error for OP_READ_BLOCK, self=/10.64.10.102:51390, remote=/10.64.32.14:50010, for file /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block -5402857470524491959_58312275
13/12/12 11:48:56 WARN hdfs.DFSClient: Failed to connect to /10.64.32.36:50010, add to deadNodes and continuejava.io.IOException: Got error for OP_READ_BLOCK, self=/10.64.10.102:41277, remote=/10.64.32.36:50010, for file /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block -5402857470524491959_58312275
13/12/12 11:48:56 INFO hdfs.DFSClient: Could not obtain block blk_-5402857470524491959_58312275 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...
13/12/12 11:48:59 WARN hdfs.DFSClient: Failed to connect to /10.64.32.14:50010, add to deadNodes and continuejava.io.IOException: Got error for OP_READ_BLOCK, self=/10.64.10.102:51403, remote=/10.64.32.14:50010, for file /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block -5402857470524491959_58312275
13/12/12 11:48:59 WARN hdfs.DFSClient: Failed to connect to /10.64.32.36:50010, add to deadNodes and continuejava.io.IOException: Got error for OP_READ_BLOCK, self=/10.64.10.102:41290, remote=/10.64.32.36:50010, for file /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo, for block -5402857470524491959_58312275
13/12/12 11:48:59 INFO hdfs.DFSClient: Could not obtain block blk_-5402857470524491959_58312275 from any node: java.io.IOException: No live nodes contain current block. Will get new block locations from namenode and retry...


--  
chenchun


On Thursday, 12 December, 2013 at 5:51 AM, John Meagher wrote:

> Is something still writing to it?
> ...
> Total files: 0 (Files currently being written: 1)
> Total blocks (validated): 0 (Total open file blocks (not validated): 1)
>  
>  
>  
> On Wed, Dec 11, 2013 at 2:37 PM, Adam Kawa <kawa.adam@gmail.com> wrote:
> > I have never seen anything like that.
> >  
> > Can you read that file?
> >  
> > $ hadoop fs -text /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
> >  
> >  
> > 2013/12/11 chenchun <chenchun.feed@gmail.com>
> > >  
> > > Hi,
> > > I've found some files on HDFS that the command “hadoop fs -ls” reports as
> > > non-empty, but “fsck” reports that these files have no replicas. Is this
> > > normal?
> > >  
> > > $ hadoop fs -ls /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo;
> > > Found 1 items
> > > -rw-r--r-- 3 sankuai supergroup 1123927 2013-12-06 03:22
> > > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
> > >  
> > > $ /opt/local/hadoop/bin/hadoop fsck
> > > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo -files -blocks -locations
> > > -racks
> > > FSCK started by sankuai from /10.64.10.102 for path
> > > /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo at Wed Dec 11 20:35:30 CST
> > > 2013
> > > Status: HEALTHY
> > > Total size: 0 B (Total open files size: 1123927 B)
> > > Total dirs: 0
> > > Total files: 0 (Files currently being written: 1)
> > > Total blocks (validated): 0 (Total open file blocks (not validated): 1)
> > > Minimally replicated blocks: 0
> > > Over-replicated blocks: 0
> > > Under-replicated blocks: 0
> > > Mis-replicated blocks: 0
> > > Default replication factor: 3
> > > Average block replication: 0.0
> > > Corrupt blocks: 0
> > > Missing replicas: 0
> > > Number of data-nodes: 38
> > > Number of racks: 6
> > > FSCK ended at Wed Dec 11 20:35:30 CST 2013 in 1 milliseconds
> > >  
> > >  
> > > The filesystem under path '/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo'
> > > is HEALTHY
> > >  
> > > --
> > > chenchun
> > >  
> >  
> >  
>  
>  
>  
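
The failing "hadoop fs -text" above is essentially a plain read plus LZO decoding. The sketch below performs the equivalent raw read through the FileSystem API (the class name TryRead is illustrative) and reports how many bytes arrive before the read fails on the unfinalized block:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TryRead {
  public static void main(String[] args) {
    Path p = new Path("/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo");
    long total = 0;
    try {
      FileSystem fs = FileSystem.get(new Configuration());
      FSDataInputStream in = fs.open(p);
      try {
        byte[] buf = new byte[64 * 1024];
        int n;
        while ((n = in.read(buf)) > 0) {
          total += n;
        }
      } finally {
        in.close();
      }
      System.out.println("read completed, " + total + " bytes");
    } catch (IOException e) {
      // Expected for this file: its only block is unfinalized, so the
      // datanodes reject OP_READ_BLOCK and the client eventually gives up.
      System.out.println("read failed after " + total + " bytes: " + e);
    }
  }
}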



Re: empty file

Posted by John Meagher <jo...@gmail.com>.
Is something still writing to it?
...
 Total files:   0 (Files currently being written: 1)
 Total blocks (validated):   0 (Total open file blocks (not validated): 1)



On Wed, Dec 11, 2013 at 2:37 PM, Adam Kawa <ka...@gmail.com> wrote:
> I have never seen anything like that.
>
> Can you read that file?
>
> $ hadoop fs -text  /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
>
>
> 2013/12/11 chenchun <ch...@gmail.com>
>>
>> Hi,
>> I find some files on hdfs which command “hadoop fs -ls” tells they are not
>> empty. But command “fsck” tells that  these files have no replications. Is
>> it normal?
>>
>> $ hadoop fs -ls /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo;
>> Found 1 items
>> -rw-r--r--   3 sankuai supergroup    1123927 2013-12-06 03:22
>> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
>>
>> $  /opt/local/hadoop/bin/hadoop fsck
>> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo  -files -blocks -locations
>> -racks
>> FSCK started by sankuai from /10.64.10.102 for path
>> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo at Wed Dec 11 20:35:30 CST
>> 2013
>> Status: HEALTHY
>>  Total size:    0 B (Total open files size: 1123927 B)
>>  Total dirs:    0
>>  Total files:   0 (Files currently being written: 1)
>>  Total blocks (validated):   0 (Total open file blocks (not validated): 1)
>>  Minimally replicated blocks:   0
>>  Over-replicated blocks:     0
>>  Under-replicated blocks:    0
>>  Mis-replicated blocks:      0
>>  Default replication factor: 3
>>  Average block replication:  0.0
>>  Corrupt blocks:             0
>>  Missing replicas:           0
>>  Number of data-nodes:       38
>>  Number of racks:            6
>> FSCK ended at Wed Dec 11 20:35:30 CST 2013 in 1 milliseconds
>>
>>
>> The filesystem under path '/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo'
>> is HEALTHY
>>
>> --
>> chenchun
>>
>
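Those two counters are the telltale sign: fsck counts files under construction separately and skips their blocks unless told otherwise. To sweep a whole directory for such files rather than probing one path at a time, the -openforwrite flag (present in 1.x fsck as well) can be combined with grep, since fsck tags each open file with OPENFORWRITE in its per-file output:

$ hadoop fsck /tmp/corrupt_lzo -openforwrite -files | grep OPENFORWRITE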

Re: empty file

Posted by Adam Kawa <ka...@gmail.com>.
I have never seen anything like that.

Can you read that file?

$ hadoop fs -text  /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo


2013/12/11 chenchun <ch...@gmail.com>

> Hi,
> I find some files on hdfs which command “hadoop fs -ls” tells they are not
> empty. But command “fsck” tells that  these files have no replications. Is
> it normal?
>
> $ hadoop fs -ls /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo;
> Found 1 items
> -rw-r--r--   3 sankuai supergroup    1123927 2013-12-06 03:22
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo
>
> $  /opt/local/hadoop/bin/hadoop fsck
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo  -files -blocks -locations
> -racks
> FSCK started by sankuai from /10.64.10.102 for path
> /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo at Wed Dec 11 20:35:30 CST
> 2013
> Status: HEALTHY
>  Total size:    0 B (Total open files size: 1123927 B)
>  Total dirs:    0
>  Total files:   0 (Files currently being written: 1)
>  Total blocks (validated):   0 (Total open file blocks (not validated): 1)
>  Minimally replicated blocks:   0
>  Over-replicated blocks:     0
>  Under-replicated blocks:    0
>  Mis-replicated blocks:      0
>  Default replication factor: 3
>  Average block replication:  0.0
>  Corrupt blocks:             0
>  Missing replicas:           0
>  Number of data-nodes:       38
>  Number of racks:            6
> FSCK ended at Wed Dec 11 20:35:30 CST 2013 in 1 milliseconds
>
>
> The filesystem under path '/tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo'
> is HEALTHY
>
> --
> chenchun
>
>
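One caveat with this check: fs -text only decompresses the .lzo if the LZO codec is registered in the client's io.compression.codecs; otherwise it typically falls back to dumping raw bytes. A sketch of a fallback once the block itself is readable again, assuming the file was written by hadoop-lzo's LzopCodec (whose output is lzop-compatible; that is an assumption about how this file was produced):

# Stream the raw bytes out of HDFS and let the local lzop binary
# decompress them; requires lzop to be installed on the client.
$ hadoop fs -cat /tmp/corrupt_lzo/lc_hadoop16.1386270004881.lzo | lzop -dc > lc_hadoop16.out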
