Posted to mapreduce-user@hadoop.apache.org by Akmal Abbasov <ak...@icloud.com> on 2015/07/28 18:41:36 UTC

HDFS datanode used space is increasing without any writes

Hi, I’m observing strange behaviour in an HDFS/HBase cluster.
The disk space of one of the datanodes is increasing very fast even when there are no write requests.
It averages 8 GB per hour. Here is a graph which shows it.

I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.

And these are the logs from the node:
2015-07-28 15:40:38,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:50010, dest: /10.32.0.140:38699, bytes: 1071, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration: 17759797
2015-07-28 15:41:15,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /10.0.0.21:60540 dest: /10.32.1.12:50010
2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration: 3600203675041
2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2015-07-28 15:50:40,745 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:6099ms (threshold=300ms)
2015-07-28 15:59:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
2015-07-28 16:00:16,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /10.32.1.12:36998 dest: /10.32.1.12:50010
2015-07-28 16:00:17,469 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:36150, dest: /10.32.1.12:50010, bytes: 32688, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration: 3601152263901
2015-07-28 16:00:17,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2015-07-28 16:03:44,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /10.0.0.19:35851 dest: /10.32.1.12:50010
2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.20:40176, dest: /10.32.1.12:50010, bytes: 316062, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration: 3600482062810
2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2015-07-28 16:11:10,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:7884ms (threshold=300ms)
2015-07-28 16:11:14,122 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4362ms (threshold=300ms)
2015-07-28 16:11:14,123 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
2015-07-28 16:13:29,968 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:659ms (threshold=300ms)
2015-07-28 16:18:33,336 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /10.0.0.20:41527 dest: /10.32.1.12:50010
2015-07-28 16:18:38,926 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:1767ms (threshold=300ms)
2015-07-28 16:28:40,580 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4099ms (threshold=300ms)
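As a quick sanity check, the Slow BlockReceiver warnings above can be counted and summarized with a short shell pipeline. This is only a sketch: the embedded sample below stands in for the real datanode log file, so point LOG at the actual log instead.

```shell
#!/bin/sh
# Summarize "Slow BlockReceiver" warnings: how many, and the worst cost in ms.
# The heredoc is an illustrative sample; replace LOG with the real datanode log.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
2015-07-28 15:50:40,745 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:6099ms (threshold=300ms)
2015-07-28 16:11:10,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:7884ms (threshold=300ms)
2015-07-28 16:13:29,968 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:659ms (threshold=300ms)
EOF
grep 'Slow BlockReceiver' "$LOG" \
  | sed 's/.*cost:\([0-9]*\)ms.*/\1/' \
  | awk '{n++; if ($1 > max) max = $1} END {printf "warnings=%d max_ms=%d\n", n, max}'
rm -f "$LOG"
```

Slow local-disk writes like these usually point at I/O contention on that node, which is worth ruling out alongside the capacity question.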

What could be the cause of this?
Thank you.



Re: HDFS datanode used space is increasing without any writes

Posted by Sandeep Nemuri <nh...@gmail.com>.
What is the size of your HBase table?

A copy of the snapshot will be stored in the archive directory.

hadoop fs -du -s -h /apps/hbase/data/data/default/<table-name>
hadoop fs -du -s -h /apps/hbase/data/archive/data/default/<table-name>

Check the sizes of these directories.
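If the archive turns out to be large, a back-of-the-envelope split can be computed from the byte counts those two commands print. The numbers below are made-up stand-ins for real `-du` output, just to show the arithmetic:

```shell
#!/bin/sh
# Hedged sketch: given byte counts from
#   hadoop fs -du -s /apps/hbase/data/data/default/<table-name>
#   hadoop fs -du -s /apps/hbase/data/archive/data/default/<table-name>
# report how much of the table's total footprint sits in the archive.
data_bytes=53687091200      # illustrative: live table HFiles, 50 GB
archive_bytes=107374182400  # illustrative: archived snapshot HFiles, 100 GB
total=$((data_bytes + archive_bytes))
pct=$((archive_bytes * 100 / total))
echo "archive holds ${pct}% of ${total} bytes"
```

If the archive dominates like this, the growth is most likely snapshot/archive retention rather than new writes.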

Thanks
Sandeep Nemuri

On Thu, Jul 30, 2015 at 3:36 PM, Akmal Abbasov <ak...@icloud.com>
wrote:

> I was running an HBase snapshot export, but I stopped it, and still the
> used capacity is increasing.
> Here you can see that it has increased to 60 GB, and mostly it is because
> of 1 datanode.
> Moreover, I am checking the directory sizes using bin/hdfs dfs -du -h /,
> and while the HDFS used capacity is increasing, the directory sizes in
> HDFS are not changing.
> Any ideas?
>
> p.s. I started an HDFS balancer several days ago, then stopped it after
> running for about 10 minutes, after reading that it is not a good idea to
> run it on the same cluster as HBase.
> Could it be because of this?
>
> Thank you.
>
>
> On 28 Jul 2015, at 19:08, Harshit Mathur <ma...@gmail.com> wrote:
>
> Are there any MapReduce jobs running?
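To quantify the gap Akmal describes between what the namespace holds (`hdfs dfs -du`) and the raw used capacity, the two can be compared directly. On a live cluster that means running `hdfs dfs -du -s /` and `hdfs dfsadmin -report`; the sketch below instead parses an illustrative excerpt of `-report` output, and the namespace size and replication factor are made-up assumptions:

```shell
#!/bin/sh
# Hedged sketch: raw DFS Used should be roughly namespace size x replication.
# A large positive excess suggests block data not accounted for in the
# namespace (e.g. blocks pending deletion or left over from block moves).
report='DFS Used: 644245094400 (600.00 GB)'   # illustrative dfsadmin -report line
dfs_used=$(echo "$report" | sed 's/DFS Used: \([0-9]*\) .*/\1/')
namespace_bytes=161061273600   # illustrative `hdfs dfs -du -s /` total
replication=3                  # illustrative dfs.replication
expected=$((namespace_bytes * replication))
echo "reported=$dfs_used expected=$expected excess=$((dfs_used - expected))"
```

If the excess keeps growing while `-du` stays flat, the extra bytes live on the datanodes but not in the namespace, which fits the interrupted-balancer theory.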


-- 
*  Regards*
*  Sandeep Nemuri*

Re: HDFS datanode used space is increasing without any writes

Posted by Ted Yu <yu...@gmail.com>.
Please take a look at HDFS-6133, which aims to help with HBase data locality.

It was integrated into the Hadoop 2.7.0 release.

FYI




>> BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443,
>> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
>> 2015-07-28 16:11:10,961 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
>> data to disk cost:7884ms (threshold=300ms)
>> 2015-07-28 16:11:14,122 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
>> data to disk cost:4362ms (threshold=300ms)
>> 2015-07-28 16:11:14,123 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took
>> 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
>> 2015-07-28 16:13:29,968 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
>> data to disk cost:659ms (threshold=300ms)
>> 2015-07-28 16:18:33,336 INFO
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
>> BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /
>> 10.0.0.20:41527 dest: /10.32.1.12:50010
>> 2015-07-28 16:18:38,926 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
>> data to disk cost:1767ms (threshold=300ms)
>> 2015-07-28 16:28:40,580 WARN
>> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
>> data to disk cost:4099ms (threshold=300ms)
>>
>> What could be the cause of this?
>> Thank you.
>>
>>
>>
>
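One detail worth noting in the quoted logs: the `duration` values on the HDFS_WRITE clienttrace lines are in nanoseconds, and several are almost exactly one hour. That is consistent with HBase region servers holding WAL blocks open until the hourly log roll, rather than with a steady stream of new data. A quick conversion (plain shell, no cluster required):

```shell
# Convert the clienttrace durations (nanoseconds) from the logs above
# into seconds and hours.
for ns in 3600203675041 3601152263901 3600482062810; do
  awk -v ns="$ns" 'BEGIN { printf "%s ns = %.1f s (~%.2f h)\n", ns, ns / 1e9, ns / 3.6e12 }'
done
```

All three writes closed after roughly 3600 seconds, which matches HBase's default `hbase.regionserver.logroll.period` of one hour.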

Re: HDFS datanode used space is increasing without any writes

Posted by Ted Yu <yu...@gmail.com>.
Please take a look at HDFS-6133, which aims to help with HBase data locality.

It was integrated into the Hadoop 2.7.0 release.

FYI
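On the balancer question upthread: an interrupted balancer can leave behind block copies that the NameNode has not yet invalidated, which shows up as raw DFS usage growing while `hdfs dfs -du` sizes stay flat. A hedged way to check, using standard HDFS commands (run against the cluster):

```shell
# Look for over-replicated blocks (possible leftovers from an
# interrupted balancer) and compare logical vs. raw usage.
check_over_replication() {
  hdfs fsck / | grep -E 'Over-replicated blocks|Under-replicated blocks'
}
compare_usage() {
  hdfs dfs -du -s -h /                                    # logical namespace size
  hdfs dfsadmin -report | grep -E '^(Name|DFS Used[:%])'  # raw usage per node
}
if [ "${1:-}" = "run" ]; then
  check_over_replication
  compare_usage
fi
```

If raw usage exceeds the logical size by far more than the replication factor explains, leftover or orphaned block files on a datanode are a likely cause.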

On Thu, Jul 30, 2015 at 3:06 AM, Akmal Abbasov <ak...@icloud.com>
wrote:

> I am running an HBase snapshot export, but I stopped it, and the used
> capacity is still increasing.
> Here you can see that it has increased to 60 GB, mostly because of one
> datanode.
> Moreover, I am checking the directory sizes using bin/hdfs dfs -du -h /
> and while the HDFS used capacity is increasing, the directory sizes in
> HDFS are not changing.
> Any ideas?
>
> p.s. I started a hdfs balancer several days ago, then stopped it after
> running for about 10 minutes, after reading that it is not a good idea to
> run it on the same cluster with HBase.
> Could it be because of this?
>
> Thank you.
>
>
> On 28 Jul 2015, at 19:08, Harshit Mathur <ma...@gmail.com> wrote:
>
> Are there any map reduce jobs running?
> On Jul 28, 2015 10:11 PM, "Akmal Abbasov" <ak...@icloud.com>
> wrote:
>
>> [original message with datanode logs, quoted in full earlier in the thread; trimmed]
>

Re: HDFS datanode used space is increasing without any writes

Posted by Sandeep Nemuri <nh...@gmail.com>.
What is the size of your HBase table?

A copy of the snapshot is kept in the archive directory.

hadoop fs -du -s -h /apps/hbase/data/data/default/<table-name>
hadoop fs -du -s -h /apps/hbase/data/archive/data/default/<table-name>

Check the sizes of these directories.
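The two checks above can be wrapped in a small script. The paths follow the layout shown here; the HBase root directory varies by install, so adjust them to your `hbase.rootdir`:

```shell
# Compare a live table with its archived copy. Exported or retained
# snapshots keep their referenced HFiles under archive/ until the
# snapshot itself is deleted, so archive/ can grow while the table
# directory stays the same size.
table_usage() {
  table="$1"
  hadoop fs -du -s -h "/apps/hbase/data/data/default/${table}"
  hadoop fs -du -s -h "/apps/hbase/data/archive/data/default/${table}"
}
if [ $# -ge 1 ]; then
  table_usage "$1"
fi
```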

Thanks
Sandeep Nemuri

On Thu, Jul 30, 2015 at 3:36 PM, Akmal Abbasov <ak...@icloud.com>
wrote:

> I am running an HBase snapshot export, but I stopped it, and the used
> capacity is still increasing.
> Here you can see that it has increased to 60 GB, mostly because of one
> datanode.
> Moreover, I am checking the directory sizes using bin/hdfs dfs -du -h /
> and while the HDFS used capacity is increasing, the directory sizes in
> HDFS are not changing.
> Any ideas?
>
> p.s. I started a hdfs balancer several days ago, then stopped it after
> running for about 10 minutes, after reading that it is not a good idea to
> run it on the same cluster with HBase.
> Could it be because of this?
>
> Thank you.
>
>
> On 28 Jul 2015, at 19:08, Harshit Mathur <ma...@gmail.com> wrote:
>
> Are there any map reduce jobs running?
> On Jul 28, 2015 10:11 PM, "Akmal Abbasov" <ak...@icloud.com>
> wrote:
>
>> [original message with datanode logs, quoted in full earlier in the thread; trimmed]
>


-- 
*  Regards*
*  Sandeep Nemuri*


Re: HDFS datanode used space is increasing without any writes

Posted by Akmal Abbasov <ak...@icloud.com>.
I am running an HBase snapshot export, but I stopped it, and the used capacity is still increasing.

Here you can see that it has increased to 60 GB, and mostly it is because of one datanode.
Moreover, I am checking the directory sizes using bin/hdfs dfs -du -h /
and while the HDFS used capacity is increasing, the directory sizes in HDFS are not changing.
Any ideas?

p.s. I started an HDFS balancer several days ago, then stopped it after running for about 10 minutes, after reading that it is not a good idea to run it on the same cluster as HBase.
Could it be because of this?

Thank you.
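To narrow down which datanode is growing, and whether the growth is inside the block pool at all, one hedged sketch (the data-directory path below is only an example; substitute your configured `dfs.datanode.data.dir`):

```shell
# Per-datanode usage as the NameNode sees it.
report_per_node() {
  hdfs dfsadmin -report | grep -E '^(Name|DFS Used):'
}
# Raw on-disk size of this datanode's block pool; run it twice an hour
# apart and compare. The path is an assumed example, not a default.
local_blockpool_size() {
  du -sh /hadoop/hdfs/data/current 2>/dev/null || true
}
if [ "${1:-}" = "run" ]; then
  report_per_node
  local_blockpool_size
fi
```

If the NameNode's per-node figure grows but the block-pool directory does not, the extra space is being consumed outside HDFS block storage (for example, logs or temporary files on the same volume).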


> On 28 Jul 2015, at 19:08, Harshit Mathur <ma...@gmail.com> wrote:
> 
> Are there any map reduce jobs running?
> 
> On Jul 28, 2015 10:11 PM, "Akmal Abbasov" <akmal.abbasov@icloud.com <ma...@icloud.com>> wrote:
> Hi, I’m observing strange behaviour in an HDFS/HBase cluster.
> The disk space of one of the datanodes is increasing very fast even when there are no write requests.
> It is about 8 GB per hour on average. Here is the graph which shows it.
> <screenshot.png>
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
> 
> And these are the logs from the node:
> [datanode log excerpt, quoted in full earlier in the thread; trimmed]
> 
> What could be the cause of this?
> Thank you.
> 
> 


Re: HDFS datanode used space is increasing without any writes

Posted by Akmal Abbasov <ak...@icloud.com>.
I am running HBase snapshot exporting, but I stopped it, and still the capacity used is increasing.

Here you can see that it is increased to 60 GB, and mostly it is because of 1 detanode.
Moreover I am checking the directories sizes using bin/hdfs dfs -du -h /
and while the HDFS used capacity is increasing, the directories sizes in HDFS is not changing.
Any ideas?

p.s. I started a hdfs balancer several days ago, then stopped it after running for about 10 minutes, after reading that it is not a good idea to run it on the same cluster with HBase.
Could it be because of this?

Thank you.


> On 28 Jul 2015, at 19:08, Harshit Mathur <ma...@gmail.com> wrote:
> 
> Are there any map reduce jobs running?
> 
> On Jul 28, 2015 10:11 PM, "Akmal Abbasov" <akmal.abbasov@icloud.com <ma...@icloud.com>> wrote:
> Hi, I’m observing strange behaviour in HDFS/HBase cluster.
> The disk space of one of datanodes is increasing very fast even when there are no write requests. 
> It is 8GB per hour in average. Here is the graph which shows it.
> <screenshot.png>
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
> 
> And this is logs from the node
> 2015-07-28 15:40:38,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:50010 <http://10.32.1.12:50010/>, dest: /10.32.0.140:38699 <http://10.32.0.140:38699/>, bytes: 1071, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration: 17759797
> 2015-07-28 15:41:15,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /10.0.0.21:60540 <http://10.0.0.21:60540/> dest: /10.32.1.12:50010 <http://10.32.1.12:50010/>
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.21:59054 <http://10.0.0.21:59054/>, dest: /10.32.1.12:50010 <http://10.32.1.12:50010/>, bytes: 124121, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration: 3600203675041
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 15:50:40,745 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:6099ms (threshold=300ms)
> 2015-07-28 15:59:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
> 2015-07-28 16:00:16,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /10.32.1.12:36998 <http://10.32.1.12:36998/> dest: /10.32.1.12:50010 <http://10.32.1.12:50010/>
> 2015-07-28 16:00:17,469 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:36150 <http://10.32.1.12:36150/>, dest: /10.32.1.12:50010 <http://10.32.1.12:50010/>, bytes: 32688, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration: 3601152263901
> 2015-07-28 16:00:17,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 16:03:44,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /10.0.0.19:35851 <http://10.0.0.19:35851/> dest: /10.32.1.12:50010 <http://10.32.1.12:50010/>
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.20:40176 <http://10.0.0.20:40176/>, dest: /10.32.1.12:50010 <http://10.32.1.12:50010/>, bytes: 316062, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration: 3600482062810
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-07-28 16:11:10,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:7884ms (threshold=300ms)
> 2015-07-28 16:11:14,122 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4362ms (threshold=300ms)
> 2015-07-28 16:11:14,123 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
> 2015-07-28 16:13:29,968 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:659ms (threshold=300ms)
> 2015-07-28 16:18:33,336 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /10.0.0.20:41527 dest: /10.32.1.12:50010
> 2015-07-28 16:18:38,926 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:1767ms (threshold=300ms)
> 2015-07-28 16:28:40,580 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4099ms (threshold=300ms)
> 
> What could be the cause of this?
> Thank you.
> 
> 


Re: HDFS datanode used space is increasing without any writes

Posted by Akmal Abbasov <ak...@icloud.com>.
I was running an HBase snapshot export, but I stopped it, and the used capacity is still increasing.

Here you can see that it has increased to 60 GB, and it is mostly because of one datanode.
Moreover, I am checking the directory sizes using bin/hdfs dfs -du -h /
and while the HDFS used capacity is increasing, the directory sizes in HDFS are not changing.
Any ideas?

p.s. I started an HDFS balancer several days ago, then stopped it after it had run for about 10 minutes, after reading that it is not a good idea to run it on the same cluster as HBase.
Could it be because of this?

Thank you.
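One way to narrow this down is to compare the NameNode's per-datanode accounting (`hdfs dfsadmin -report`) with what `hdfs dfs -du` and a local `du` on the datanode say. A minimal sketch; the sample report stanza and the data directory path below are illustrative, not taken from this cluster:

```shell
#!/bin/sh
# Hypothetical sample of one datanode's stanza from `hdfs dfsadmin -report`;
# on the live cluster, pipe the real command output instead.
report='Name: 10.32.1.12:50010 (datanode-1)
DFS Used: 64424509440 (60 GB)
Non DFS Used: 1073741824 (1 GB)'

# Per-datanode "DFS Used" in bytes. If this keeps growing while
# `hdfs dfs -du -h /` stays flat, the growth is replica data the
# namespace does not account for (e.g. over-replicated blocks that
# have not yet been deleted).
dfs_used=$(printf '%s\n' "$report" | awk -F'[ (]' '/^DFS Used:/ {print $3}')
echo "$dfs_used"

# Cross-check against the local filesystem on the datanode itself
# (path is illustrative; use your configured dfs.datanode.data.dir):
#   du -sh /hadoop/dfs/data
```

If "DFS Used" and the local `du` agree but `hdfs dfs -du` does not, the extra space is in block replicas rather than visible files, which fits the interrupted-balancer theory.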


> On 28 Jul 2015, at 19:08, Harshit Mathur <ma...@gmail.com> wrote:
> 
> Are there any map reduce jobs running?
> 
> On Jul 28, 2015 10:11 PM, "Akmal Abbasov" <akmal.abbasov@icloud.com> wrote:
> Hi, I’m observing strange behaviour in HDFS/HBase cluster.
> The disk space of one of datanodes is increasing very fast even when there are no write requests. 
> It is 8GB per hour in average. Here is the graph which shows it.
> <screenshot.png>
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
> 
> And this is logs from the node
> 2015-07-28 15:40:38,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:50010, dest: /10.32.0.140:38699, bytes: 1071, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration: 17759797
> 2015-07-28 15:41:15,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /10.0.0.21:60540 dest: /10.32.1.12:50010
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration: 3600203675041
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 15:50:40,745 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:6099ms (threshold=300ms)
> 2015-07-28 15:59:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
> 2015-07-28 16:00:16,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /10.32.1.12:36998 dest: /10.32.1.12:50010
> 2015-07-28 16:00:17,469 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:36150, dest: /10.32.1.12:50010, bytes: 32688, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration: 3601152263901
> 2015-07-28 16:00:17,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 16:03:44,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /10.0.0.19:35851 dest: /10.32.1.12:50010
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.20:40176, dest: /10.32.1.12:50010, bytes: 316062, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration: 3600482062810
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-07-28 16:11:10,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:7884ms (threshold=300ms)
> 2015-07-28 16:11:14,122 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4362ms (threshold=300ms)
> 2015-07-28 16:11:14,123 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
> 2015-07-28 16:13:29,968 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:659ms (threshold=300ms)
> 2015-07-28 16:18:33,336 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /10.0.0.20:41527 dest: /10.32.1.12:50010
> 2015-07-28 16:18:38,926 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:1767ms (threshold=300ms)
> 2015-07-28 16:28:40,580 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4099ms (threshold=300ms)
> 
> What could be the cause of this?
> Thank you.
> 
> 


Re: HDFS datanode used space is increasing without any writes

Posted by Harshit Mathur <ma...@gmail.com>.
Are there any MapReduce jobs running?
On Jul 28, 2015 10:11 PM, "Akmal Abbasov" <ak...@icloud.com> wrote:

> Hi, I’m observing strange behaviour in HDFS/HBase cluster.
> The disk space of one of datanodes is increasing very fast even when there
> are no write requests.
> It is 8GB per hour in average. Here is the graph which shows it.
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
>
> And this is logs from the node
> 2015-07-28 15:40:38,795 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
> 10.32.1.12:50010, dest: /10.32.0.140:38699, bytes: 1071, op: HDFS_READ,
> cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID:
> 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid:
> BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration:
> 17759797
> 2015-07-28 15:41:15,111 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
> BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /
> 10.0.0.21:60540 dest: /10.32.1.12:50010
> 2015-07-28 15:41:15,304 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
> 10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE,
> cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset:
> 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration:
> 3600203675041
> 2015-07-28 15:41:15,304 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238,
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 15:50:40,745 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:6099ms (threshold=300ms)
> 2015-07-28 15:59:21,130 INFO
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool
> BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing
> metadata files:0, missing block files:0, missing blocks in memory:0,
> mismatched blocks:0
> 2015-07-28 16:00:16,770 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
> BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /
> 10.32.1.12:36998 dest: /10.32.1.12:50010
> 2015-07-28 16:00:17,469 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
> 10.32.1.12:36150, dest: /10.32.1.12:50010, bytes: 32688, op: HDFS_WRITE,
> cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset:
> 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration:
> 3601152263901
> 2015-07-28 16:00:17,472 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442,
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 16:03:44,011 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
> BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /
> 10.0.0.19:35851 dest: /10.32.1.12:50010
> 2015-07-28 16:03:44,169 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
> 10.0.0.20:40176, dest: /10.32.1.12:50010, bytes: 316062, op: HDFS_WRITE,
> cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset:
> 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration:
> 3600482062810
> 2015-07-28 16:03:44,169 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443,
> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-07-28 16:11:10,961 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:7884ms (threshold=300ms)
> 2015-07-28 16:11:14,122 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:4362ms (threshold=300ms)
> 2015-07-28 16:11:14,123 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took
> 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
> 2015-07-28 16:13:29,968 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:659ms (threshold=300ms)
> 2015-07-28 16:18:33,336 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
> BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /
> 10.0.0.20:41527 dest: /10.32.1.12:50010
> 2015-07-28 16:18:38,926 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:1767ms (threshold=300ms)
> 2015-07-28 16:28:40,580 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:4099ms (threshold=300ms)
>
> What could be the cause of this?
> Thank you.
>
>
>

Re: HDFS datanode used space is increasing without any writes

Posted by Ted Yu <yu...@gmail.com>.
From the log below, hbase-rs4 was writing to the datanode.

Can you take a look at the region server log and see if there is some clue?

Thanks 
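The clienttrace lines already name the writing client in their cliID field, so a grep over the datanode log identifies which region server to look at. A sketch using one of the log lines quoted in this thread (abbreviated; the variable names are illustrative):

```shell
#!/bin/sh
# One clienttrace line from the datanode log in this thread (abbreviated).
line='2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset: 0'

# HBase region server DFSClients are named DFSClient_hb_rs_<hostname>,...
# so extracting that token tells you whose log to read next. On the
# datanode you would run the same grep over the whole log file.
writer=$(printf '%s\n' "$line" | grep -o 'DFSClient_hb_rs_[^,]*')
echo "$writer"
```

Running the same grep over the full datanode log and piping through `sort | uniq -c` would show which region server dominates the writes.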



> On Jul 28, 2015, at 9:41 AM, Akmal Abbasov <ak...@icloud.com> wrote:
> 
> Hi, I’m observing strange behaviour in HDFS/HBase cluster.
> The disk space of one of datanodes is increasing very fast even when there are no write requests. 
> It is 8GB per hour in average. Here is the graph which shows it.
> <screenshot.png>
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
> 
> And this is logs from the node
> 2015-07-28 15:40:38,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:50010, dest: /10.32.0.140:38699, bytes: 1071, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration: 17759797
> 2015-07-28 15:41:15,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /10.0.0.21:60540 dest: /10.32.1.12:50010
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration: 3600203675041
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 15:50:40,745 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:6099ms (threshold=300ms)
> 2015-07-28 15:59:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
> 2015-07-28 16:00:16,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /10.32.1.12:36998 dest: /10.32.1.12:50010
> 2015-07-28 16:00:17,469 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:36150, dest: /10.32.1.12:50010, bytes: 32688, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration: 3601152263901
> 2015-07-28 16:00:17,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 16:03:44,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /10.0.0.19:35851 dest: /10.32.1.12:50010
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.20:40176, dest: /10.32.1.12:50010, bytes: 316062, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration: 3600482062810
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-07-28 16:11:10,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:7884ms (threshold=300ms)
> 2015-07-28 16:11:14,122 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4362ms (threshold=300ms)
> 2015-07-28 16:11:14,123 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
> 2015-07-28 16:13:29,968 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:659ms (threshold=300ms)
> 2015-07-28 16:18:33,336 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /10.0.0.20:41527 dest: /10.32.1.12:50010
> 2015-07-28 16:18:38,926 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:1767ms (threshold=300ms)
> 2015-07-28 16:28:40,580 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4099ms (threshold=300ms)
> 
> What could be the cause of this?
> Thank you.
> 
> 

Re: HDFS datanode used space is increasing without any writes

Posted by Ted Yu <yu...@gmail.com>.
From log below, hbase-rs4 was writing to the datanode. 

Can you take a look at region server log and see if there is some clue ?

Thanks 



> On Jul 28, 2015, at 9:41 AM, Akmal Abbasov <ak...@icloud.com> wrote:
> 
> Hi, I’m observing strange behaviour in HDFS/HBase cluster.
> The disk space of one of datanodes is increasing very fast even when there are no write requests. 
> It is 8GB per hour in average. Here is the graph which shows it.
> <screenshot.png>
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
> 
> And this is logs from the node
> 2015-07-28 15:40:38,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:50010, dest: /10.32.0.140:38699, bytes: 1071, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration: 17759797
> 2015-07-28 15:41:15,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /10.0.0.21:60540 dest: /10.32.1.12:50010
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration: 3600203675041
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 15:50:40,745 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:6099ms (threshold=300ms)
> 2015-07-28 15:59:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
> 2015-07-28 16:00:16,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /10.32.1.12:36998 dest: /10.32.1.12:50010
> 2015-07-28 16:00:17,469 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:36150, dest: /10.32.1.12:50010, bytes: 32688, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration: 3601152263901
> 2015-07-28 16:00:17,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 16:03:44,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /10.0.0.19:35851 dest: /10.32.1.12:50010
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.20:40176, dest: /10.32.1.12:50010, bytes: 316062, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration: 3600482062810
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-07-28 16:11:10,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:7884ms (threshold=300ms)
> 2015-07-28 16:11:14,122 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4362ms (threshold=300ms)
> 2015-07-28 16:11:14,123 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
> 2015-07-28 16:13:29,968 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:659ms (threshold=300ms)
> 2015-07-28 16:18:33,336 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /10.0.0.20:41527 dest: /10.32.1.12:50010
> 2015-07-28 16:18:38,926 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:1767ms (threshold=300ms)
> 2015-07-28 16:28:40,580 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4099ms (threshold=300ms)
> 
> What could be the cause of this?
> Thank you.
> 
> 

Re: HDFS datanode used space is increasing without any writes

Posted by Ted Yu <yu...@gmail.com>.
From log below, hbase-rs4 was writing to the datanode. 

Can you take a look at region server log and see if there is some clue ?

Thanks 



> On Jul 28, 2015, at 9:41 AM, Akmal Abbasov <ak...@icloud.com> wrote:
> 
> Hi, I’m observing strange behaviour in HDFS/HBase cluster.
> The disk space of one of datanodes is increasing very fast even when there are no write requests. 
> It is 8GB per hour in average. Here is the graph which shows it.
> <screenshot.png>
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
> 
> And this is logs from the node
> 2015-07-28 15:40:38,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:50010, dest: /10.32.0.140:38699, bytes: 1071, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration: 17759797
> 2015-07-28 15:41:15,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /10.0.0.21:60540 dest: /10.32.1.12:50010
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration: 3600203675041
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 15:50:40,745 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:6099ms (threshold=300ms)
> 2015-07-28 15:59:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
> 2015-07-28 16:00:16,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /10.32.1.12:36998 dest: /10.32.1.12:50010
> 2015-07-28 16:00:17,469 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:36150, dest: /10.32.1.12:50010, bytes: 32688, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration: 3601152263901
> 2015-07-28 16:00:17,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 16:03:44,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /10.0.0.19:35851 dest: /10.32.1.12:50010
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.20:40176, dest: /10.32.1.12:50010, bytes: 316062, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration: 3600482062810
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-07-28 16:11:10,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:7884ms (threshold=300ms)
> 2015-07-28 16:11:14,122 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4362ms (threshold=300ms)
> 2015-07-28 16:11:14,123 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
> 2015-07-28 16:13:29,968 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:659ms (threshold=300ms)
> 2015-07-28 16:18:33,336 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /10.0.0.20:41527 dest: /10.32.1.12:50010
> 2015-07-28 16:18:38,926 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:1767ms (threshold=300ms)
> 2015-07-28 16:28:40,580 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4099ms (threshold=300ms)
> 
> What could be the cause of this?
> Thank you.
> 
> 

Re: HDFS datanode used space is increasing without any writes

Posted by Harshit Mathur <ma...@gmail.com>.
Are there any MapReduce jobs running?
On Jul 28, 2015 10:11 PM, "Akmal Abbasov" <ak...@icloud.com> wrote:

> Hi, I’m observing strange behaviour in HDFS/HBase cluster.
> The disk space of one of datanodes is increasing very fast even when there
> are no write requests.
> It is 8GB per hour in average. Here is the graph which shows it.
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
>
> And this is logs from the node
> 2015-07-28 15:40:38,795 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
> 10.32.1.12:50010, dest: /10.32.0.140:38699, bytes: 1071, op: HDFS_READ,
> cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID:
> 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid:
> BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration:
> 17759797
> 2015-07-28 15:41:15,111 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
> BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /
> 10.0.0.21:60540 dest: /10.32.1.12:50010
> 2015-07-28 15:41:15,304 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
> 10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE,
> cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset:
> 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration:
> 3600203675041
> 2015-07-28 15:41:15,304 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238,
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 15:50:40,745 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:6099ms (threshold=300ms)
> 2015-07-28 15:59:21,130 INFO
> org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool
> BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing
> metadata files:0, missing block files:0, missing blocks in memory:0,
> mismatched blocks:0
> 2015-07-28 16:00:16,770 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
> BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /
> 10.32.1.12:36998 dest: /10.32.1.12:50010
> 2015-07-28 16:00:17,469 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
> 10.32.1.12:36150, dest: /10.32.1.12:50010, bytes: 32688, op: HDFS_WRITE,
> cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset:
> 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration:
> 3601152263901
> 2015-07-28 16:00:17,472 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442,
> type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 16:03:44,011 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
> BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /
> 10.0.0.19:35851 dest: /10.32.1.12:50010
> 2015-07-28 16:03:44,169 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /
> 10.0.0.20:40176, dest: /10.32.1.12:50010, bytes: 316062, op: HDFS_WRITE,
> cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset:
> 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration:
> 3600482062810
> 2015-07-28 16:03:44,169 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder:
> BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443,
> type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-07-28 16:11:10,961 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:7884ms (threshold=300ms)
> 2015-07-28 16:11:14,122 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:4362ms (threshold=300ms)
> 2015-07-28 16:11:14,123 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took
> 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
> 2015-07-28 16:13:29,968 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:659ms (threshold=300ms)
> 2015-07-28 16:18:33,336 INFO
> org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving
> BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /
> 10.0.0.20:41527 dest: /10.32.1.12:50010
> 2015-07-28 16:18:38,926 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:1767ms (threshold=300ms)
> 2015-07-28 16:28:40,580 WARN
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write
> data to disk cost:4099ms (threshold=300ms)
>
> What could be the cause of this?
> Thank you.
>
>
>

Re: HDFS datanode used space is increasing without any writes

Posted by Ted Yu <yu...@gmail.com>.
From the log below, hbase-rs4 was writing to the datanode.

Can you take a look at the region server log and see if there is some clue?

Thanks 
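One more detail worth noting in the quoted clienttrace lines: the duration field is in nanoseconds, and the HDFS_WRITE durations above (e.g. 3600203675041) all work out to almost exactly one hour. That is consistent with region servers holding a WAL block open until an hourly log roll (hbase.regionserver.logroll.period defaults to one hour), rather than with steady client writes. A minimal sketch of the conversion, using the values from the log:

```python
# The clienttrace "duration" field is reported in nanoseconds.
# Values taken from the HDFS_WRITE lines quoted in this thread.
durations_ns = [3600203675041, 3601152263901, 3600482062810]

# Convert ns -> seconds -> hours.
durations_h = [ns / 1e9 / 3600 for ns in durations_ns]
print(durations_h)  # each value is ~1.0 hour
```

If the writes really are hour-long WAL pipelines, the space growth would come from HBase write-ahead logs (and their replicas) rather than from client requests; checking the size of the WAL directory under the HBase root dir (directory name varies by version) would confirm or rule that out.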



> On Jul 28, 2015, at 9:41 AM, Akmal Abbasov <ak...@icloud.com> wrote:
> 
> Hi, I’m observing strange behaviour in HDFS/HBase cluster.
> The disk space of one of datanodes is increasing very fast even when there are no write requests. 
> It is 8GB per hour in average. Here is the graph which shows it.
> <screenshot.png>
> I am using hbase-0.98.7-hadoop2 and hadoop-2.5.1.
> 
> And this is logs from the node
> 2015-07-28 15:40:38,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:50010, dest: /10.32.0.140:38699, bytes: 1071, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-689748537_1, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1074784244_1045663, duration: 17759797
> 2015-07-28 15:41:15,111 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311714_1574450 src: /10.0.0.21:60540 dest: /10.32.1.12:50010
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.21:59054, dest: /10.32.1.12:50010, bytes: 124121, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs4,60020,1438094355024_530940245_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, duration: 3600203675041
> 2015-07-28 15:41:15,304 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311536_1574238, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 15:50:40,745 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:6099ms (threshold=300ms)
> 2015-07-28 15:59:21,130 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: BlockPool BP-439084760-10.32.0.180-1387281790961 Total blocks: 65856, missing metadata files:0, missing block files:0, missing blocks in memory:0, mismatched blocks:0
> 2015-07-28 16:00:16,770 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311715_1574451 src: /10.32.1.12:36998 dest: /10.32.1.12:50010
> 2015-07-28 16:00:17,469 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.32.1.12:36150, dest: /10.32.1.12:50010, bytes: 32688, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs5,60020,1438088401479_1146354759_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, duration: 3601152263901
> 2015-07-28 16:00:17,472 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311706_1574442, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2015-07-28 16:03:44,011 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311716_1574452 src: /10.0.0.19:35851 dest: /10.32.1.12:50010
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /10.0.0.20:40176, dest: /10.32.1.12:50010, bytes: 316062, op: HDFS_WRITE, cliID: DFSClient_hb_rs_hbase-rs1,60020,1438092204868_-99326843_35, offset: 0, srvID: 6c25ffd4-3dc7-4e3a-af56-5cc8aa9220e0, blockid: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, duration: 3600482062810
> 2015-07-28 16:03:44,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-439084760-10.32.0.180-1387281790961:blk_1075311707_1574443, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
> 2015-07-28 16:11:10,961 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:7884ms (threshold=300ms)
> 2015-07-28 16:11:14,122 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4362ms (threshold=300ms)
> 2015-07-28 16:11:14,123 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow flushOrSync took 3160ms (threshold=300ms), isSync:false, flushTotalNanos=3160364203ns
> 2015-07-28 16:13:29,968 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:659ms (threshold=300ms)
> 2015-07-28 16:18:33,336 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-439084760-10.32.0.180-1387281790961:blk_1075311717_1574453 src: /10.0.0.20:41527 dest: /10.32.1.12:50010
> 2015-07-28 16:18:38,926 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:1767ms (threshold=300ms)
> 2015-07-28 16:28:40,580 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write data to disk cost:4099ms (threshold=300ms)
> 
> What could be the cause of this?
> Thank you.
> 
> 
