Posted to common-user@hadoop.apache.org by cho ju il <tj...@kgrid.co.kr> on 2014/11/07 03:26:15 UTC

How can I balance the disks?

My Hadoop cluster versions:
Hadoop 1.1.2, Hadoop 2.4.1
 
The disk usage of one datanode is unbalanced.
I suspect the cause is that one of its disks is at 100% usage.
Example:
/disk01 100%
/disk02 45%
/disk03 70%

Is my guess correct?
If so, how can I balance the disks?
 
**** Upload application (Hadoop client) log
java.io.IOException: All datanodes [server:port] are bad. Aborting... 
 
 
**** Datanode log
2014-11-01 17:47:02,820  DataStreamer Exception: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
        at sun.nio.ch.IOUtil.write(IOUtil.java:40)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
        at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
        at java.io.DataOutputStream.write(DataOutputStream.java:90)
        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:3083)
 
2014-11-01 17:47:02,821  Error Recovery for blk_-7118739414552476963_15341530 bad datanode[0] [server:port]

Re: How can I balance the disks?

Posted by hadoop hive <ha...@gmail.com>.
Hey,

1. Stop the datanode.
2. Copy the block files from one disk to another, keeping the same subdirectory path on the new disk.
3. Restart the datanode and run fsck so the changes are reflected in the metadata.
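The copy in step 2 can be sketched roughly like this. It assumes the usual data-directory layout, where each block is a `blk_<id>` file with a companion `blk_<id>_<genstamp>.meta` checksum file under `current/subdir*`; the function name and paths are hypothetical illustrations, not a tested tool, and the datanode must be stopped first.

```python
import os
import shutil

def move_blocks(src_dir, dst_dir, block_ids):
    """Move block files (blk_<id> and the matching blk_<id>_*.meta)
    from one data directory to another, preserving the relative
    subdirectory path so the datanode finds them on restart."""
    moved = []
    for root, _dirs, files in os.walk(src_dir):
        rel = os.path.relpath(root, src_dir)
        for name in files:
            # Match both the block file and its .meta checksum file.
            if any(name.startswith("blk_%s" % bid) for bid in block_ids):
                target_dir = os.path.join(dst_dir, rel)
                os.makedirs(target_dir, exist_ok=True)
                shutil.move(os.path.join(root, name), target_dir)
                moved.append(os.path.join(rel, name))
    return moved

# Example (hypothetical mount points):
# move_blocks("/disk01/hdfs/data", "/disk02/hdfs/data",
#             ["-7118739414552476963"])
```

Moving the `.meta` file together with the block, and keeping the same path relative to the data directory root, is what lets the restarted datanode report the blocks without marking them corrupt.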

You can find steps at www.bigdataboard.in

Thanks
Vikas srivastava
